For a CMOS or CCD...

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Strange question: does anyone know whether, given the right tools, one could make out (edit: recognize) a “mini image” on the grid of a CMOS or CCD before it is digitized into system memory?
 

MisterBill2

Joined Jan 23, 2018
18,179
Usually when that happens, it is a manufacturing defect.
Otherwise it would require one more mask and exposure during the manufacturing process, so the cost would be in the tens of thousands of dollars.
 

MisterBill2

Joined Jan 23, 2018
18,179
If the TS can provide a much more detailed description of the goal, they will get a much better answer than guesses. That includes my answer, which is based on the last cost I heard for an IC photo-mask used in sensor fabrication.
So really, a much more detailed description of what is wanted will get a better answer.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
If the TS can provide a much more detailed description of the goal, they will get a much better answer than guesses. That includes my answer, which is based on the last cost I heard for an IC photo-mask used in sensor fabrication.
So really, a much more detailed description of what is wanted will get a better answer.
Thanks for the replies...

It's really just a request for information about how the device actually functions: how light carries the image to it, and how the image is then sent to memory.

I've done some googling on it and can't get a straight answer.

As I understand it, a 2D image of a 3D object is being "broadcast" in real time through a lens as a wave, which then fragments into photons that "predictably" land on just the right pixels, in the right order, on the pixelated sensor grid (correct? or "plane", whatever the proper term is), and somehow a software-driven, row-based scan of that image, as "situated" on the grid, is sent to memory.

Someone said you'd not be able to "see" an image on the grid, conceptually speaking, even if you were able to somehow "zoom in" on it with the proper equipment. Is that true? In my mind, there has to be a discernible image on the grid (even if you can't see it with the naked eye), like a "mini-screen", before it is transferred to memory as a "2D array" representation that can then be "seen" on an LED or LCD monitor.

Let me know if that's not clear...thanks for any info!
 

ronsimpson

Joined Oct 7, 2019
2,989
You have probably seen this diagram of how a camera works: the object is projected onto the film.
[attached image: an object projected through a camera lens onto film]
This is not a good picture, but imagine the film removed and a light sensor put in the same place. Yes, if you could look inside either type of camera, the image is on the film or sensor.
[attached image: the same camera with the film replaced by a light sensor]
Here is a picture of an early sensor. It has millions of little cells in a "grid".
[attached image: photo of an early image sensor]
You can almost see the grid in this drawing. Each cell is a "pixel".
[attached image: drawing showing the sensor's grid of pixel cells]
If you have more questions just ask.
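And if it helps to think of that grid in software terms, here is a toy Python sketch (all numbers made up, not from any real camera) of what the grid of cells amounts to once the charges have been digitized:

```python
import numpy as np

# Once digitized, the sensor's grid of cells is just a 2D array of
# brightness numbers -- one value per cell ("pixel").
ROWS, COLS = 4, 6   # a toy 4x6 sensor; real ones have millions of cells

# Fill the grid with made-up light levels (0 = dark, 255 = saturated).
grid = np.random.randint(0, 256, size=(ROWS, COLS), dtype=np.uint8)

print(grid)         # the whole "mini image" at once
print(grid[2, 3])   # the value read from the cell at row 2, column 3
```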
 

jpanhalt

Joined Jan 18, 2008
11,087
The eye is not a fixed imaging surface. We scan. In sufficient light, only a small part of the retina, the fovea, is used for detail and color. Anatomically, the total number of cones is far smaller than the number of rods, but the cones are concentrated in that small area: https://www.ncbi.nlm.nih.gov/books/NBK10848/

Despite the fact that perception in typical daytime light levels is dominated by cone-mediated vision, the total number of rods in the human retina (91 million) far exceeds the number of cones (roughly 4.5 million). As a result, the density of rods is much greater than cones throughout most of the retina. However, this relationship changes dramatically in the fovea, a highly specialized region of the central retina that measures about 1.2 millimeters in diameter (Figure 11.11). In the fovea, cone density increases almost 200-fold, reaching, at its center, the highest receptor packing density anywhere in the retina.
 

MisterBill2

Joined Jan 23, 2018
18,179
Because the image on the sensor is "charge" and not light, there would be nothing to see, unless somehow one could see charge. The image is present, so if one were somehow able to see the charge, one would be able to observe it. That is, presuming one's vision had enough resolution, because the area is quite small, even on the multi-gigapixel camera sensors.
And I am much relieved that the TS was not hoping to print resident images on a camera sensor array.
The camera does have an array of pixel elements, and each gets a charge proportional to the total amount of light energy falling within that pixel element. Thus there is an immediate quantization error introduced in areas with detail, because the grid is not fine enough to resolve the varying light intensity across a pixel. But that may or may not matter, depending on the resolution required.
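To illustrate that quantization, here is a toy numerical sketch (plain Python with NumPy; the scene and pixel size are invented for illustration). Each pixel reports one number for everything landing inside it, so a sharp edge comes out as in-between values:

```python
import numpy as np

# Toy illustration of spatial quantization: each pixel reports ONE
# number -- the average light energy falling anywhere inside its
# area -- so detail finer than a pixel is simply lost.

FINE = 8  # pretend each pixel covers an 8x8 patch of the "true" scene

# A made-up "true" scene: a sharp diagonal edge at sub-pixel resolution.
scene = np.fromfunction(lambda y, x: (x > y).astype(float), (32, 32))

# Integrate over each pixel's area by averaging every 8x8 block.
h, w = scene.shape
pixels = scene.reshape(h // FINE, FINE, w // FINE, FINE).mean(axis=(1, 3))

print(pixels)  # 4x4 grid: edge pixels hold in-between values, not 0 or 1
```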
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Thank you all for the excellent replies, including the great imagery. It is as I suspected, then—a mini 2D screen of charges that can be visually “transposed”, as we do, to the pixels of an LED or LCD.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Because the image on the sensor is "charge" and not light, there would be nothing to see, unless somehow one could see charge. The image is present, so if one were somehow able to see the charge, one would be able to observe it. That is, presuming one's vision had enough resolution, because the area is quite small, even on the multi-gigapixel camera sensors.
And I am much relieved that the TS was not hoping to print resident images on a camera sensor array.
The camera does have an array of pixel elements, and each gets a charge proportional to the total amount of light energy falling within that pixel element. Thus there is an immediate quantization error introduced in areas with detail, because the grid is not fine enough to resolve the varying light intensity across a pixel. But that may or may not matter, depending on the resolution required.
Yes, definitely not looking to print resident images. ;-)

The deeper end of my question was:

I am curious about where each photon gets that specific “intensity information” you mention above, if the photons are originally “manifested” from a single wave. It's a wave from, say, the top of the pencil in that image above, relayed to a lens, and then at some specific point the wave has to break into individual photon components, with each unique photon responsible for activating a separate, discrete grid element, no?
 

MrChips

Joined Oct 2, 2009
30,720
Actually, it is a lot more complex than that.
I am sitting here looking at flowers, plants, trees, birds, etc. in a garden.
The sun shines on every object and sunlight is scattered off every part of the object.
I think of it as a zillion radio transmitters emitting RF signals all at different frequencies all at the same time.
The signals all blend together and pass through a tiny aperture, the iris of my eye. The lens is a Fourier transformer and it does magical things to all these signals. An inverted and laterally reversed image is formed on the retina at the back of my eyeball.
After that, it's way beyond me.
 

WBahn

Joined Mar 31, 2012
29,979
Thanks for the replies...

It's really just a request for information about how the device actually functions: how light carries the image to it, and how the image is then sent to memory.

I've done some googling on it and can't get a straight answer.

As I understand it, a 2D image of a 3D object is being "broadcast" in real time through a lens as a wave, which then fragments into photons that "predictably" land on just the right pixels, in the right order, on the pixelated sensor grid (correct? or "plane", whatever the proper term is), and somehow a software-driven, row-based scan of that image, as "situated" on the grid, is sent to memory.

Someone said you'd not be able to "see" an image on the grid, conceptually speaking, even if you were able to somehow "zoom in" on it with the proper equipment. Is that true? In my mind, there has to be a discernible image on the grid (even if you can't see it with the naked eye), like a "mini-screen", before it is transferred to memory as a "2D array" representation that can then be "seen" on an LED or LCD monitor.

Let me know if that's not clear...thanks for any info!
The image information is generally stored as a charge on some type of capacitor. In a CCD it is the gate capacitance of overlapping transistors, and in a CMOS imager it is usually a charge-integration capacitor, which is sometimes just the parasitic capacitance of a certain node.
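For a rough feel for the numbers, here is a back-of-the-envelope sketch of that charge-integration step (all values are invented for illustration, not taken from any real sensor):

```python
# Back-of-the-envelope model of one pixel's charge integration.
# Every value here is an assumption chosen for illustration.

ELECTRON = 1.602e-19   # charge per electron, in coulombs
C_NODE   = 5e-15       # assumed integration-node capacitance: 5 fF
T_EXP    = 0.01        # assumed exposure time: 10 ms

photons_per_second = 1.0e6   # assumed photon arrival rate at this pixel
quantum_efficiency = 0.5     # assumed fraction of photons freeing an electron

# Light frees electrons, which accumulate on the node capacitor.
electrons = photons_per_second * quantum_efficiency * T_EXP
charge = electrons * ELECTRON       # Q, in coulombs
voltage = charge / C_NODE           # V = Q / C, what gets read out

print(f"{electrons:.0f} electrons -> {voltage * 1000:.1f} mV on the node")
```

With these made-up numbers, one pixel ends the exposure holding about 5000 electrons, which shows up as roughly 160 mV on the node.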
 

ronsimpson

Joined Oct 7, 2019
2,989
Because the image on the sensor is "charge" and not light, there would be nothing to see
The image is light, and it is focused on the sensor. I believe you can see it on the sensor. The sensor changes the light image into a charge image. So the top of the sensor has a light image you can see, and in or below the sensor the image is in electron form. (And later on, the image is stored on a hard drive in magnetic form.)
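To tie that charge image back to the original question about the row-based scan, here is a toy Python sketch of a row-by-row readout into memory (the real thing is done by on-chip shift registers and an ADC; everything here is simplified and made up):

```python
import numpy as np

# Toy sketch of readout: the 2D "charge image" on the sensor is
# scanned out one row at a time into system memory. Real readout is
# done by on-chip shift registers and an analog-to-digital converter.

charge_image = np.random.rand(4, 6)   # pretend charge levels on a 4x6 grid

memory = []                            # destination "frame buffer"
for row in range(charge_image.shape[0]):
    # Each row of charge is shifted to an output amplifier and
    # digitized; here we just scale it to 8-bit values and copy it.
    digitized_row = (charge_image[row] * 255).astype(np.uint8)
    memory.append(digitized_row)

frame = np.stack(memory)   # the same 2D arrangement, now in memory
print(frame)
```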
 

WBahn

Joined Mar 31, 2012
29,979
As the photons enter the sensor they produce electron-hole pairs that are swept apart by the imposed electric field and used to charge a node. There's really nothing to see visually. If you had the right equipment, you could reconstruct the image, to some degree, by measuring the fields across the die face due to the charge distribution across the storage nodes.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Actually, it is a lot more complex than that.
I am sitting here looking at flowers, plants, trees, birds, etc. in a garden.
The sun shines on every object and sunlight is scattered off every part of the object.
I think of it as a zillion radio transmitters emitting RF signals all at different frequencies all at the same time.
The signals all blend together and pass through a tiny aperture, the iris of my eye. The lens is a Fourier transformer and it does magical things to all these signals. An inverted and laterally reversed image is formed on the retina at the back of my eyeball.
After that, it's way beyond me.
Magical definitely being the operative word there!

The "magic" I'm trying to pinpoint is in how the lens is dependably placing the right information at the right sections of the retina or other sensor. The wave becomes individual digital components at some point, and those individual components fire in geometric relationship that dependably makes sense in forming an image.
 

MrChips

Joined Oct 2, 2009
30,720
Actually, that "magic" part is reasonably well explained in optical physics.

Optical waves disperse in all directions from a single point source. When they encounter a medium of a different refractive index, such as the lens, they bend in such a manner that they come to a focal point on the retina of the eye or the sensor in the camera.

In a pin-hole camera, where no lens is required, optical waves travel along straight lines. An image appears on the image plane wherever the plane is situated. There is no unique focal plane or focal distance.

Hence, for the purpose of your inquiry, you can eliminate the lens and consider the case of a pin-hole camera, which has a tiny aperture.
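If it helps to see why the right information always lands at the right place, here is a toy Python sketch of the pin-hole geometry (standard similar-triangles projection; the focal distance and scene points are invented):

```python
# Toy pin-hole projection: rays travel in straight lines through the
# aperture, so a 3D point (X, Y, Z) lands at exactly one spot on the
# image plane. That one-to-one geometry is what puts "the right
# information at the right place" on the sensor or retina.

FOCAL = 0.05  # assumed pinhole-to-image-plane distance: 50 mm

def project(X, Y, Z):
    """Map a scene point to image-plane coordinates (similar triangles)."""
    # x/f = -X/Z and y/f = -Y/Z; the minus signs invert the image.
    return (-FOCAL * X / Z, -FOCAL * Y / Z)

# Two points on a pencil one metre away: tip up, eraser down.
print(project(0.0,  0.10, 1.0))   # tip    -> lands low on the plane
print(project(0.0, -0.10, 1.0))   # eraser -> lands high: image inverted
```

Note there is no randomness anywhere in that mapping: every ray from a given scene point that makes it through the aperture can only arrive at one spot, which is why the image forms dependably.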
 