Strange question: Does anyone know, if one were to have the right tools to do so, could one make out (edit: recognize) a "mini image" on the grid of a CMOS or CCD before it is digitized into system memory?
> could one make out a "mini image" on the grid of a CMOS or CCD before it is digitized into system memory?

More information, please. Do you want a camera with a built-in grid, or do you want to add a grid to a stored image?
> Strange question: Does anyone know, if one were to have the right tools to do so, could one make out a "mini image" on the grid of a CMOS or CCD before it is digitized into system memory?

What, exactly, do you mean by "make out a 'mini image' on the grid"?
> Thanks for the replies...

If the TS can provide a much more detailed description of what the goal is, they will get a much better answer than guesses. That includes my answer, which was based on the last cost I heard of for an IC photomask used in their fabrication.
So really, a much more detailed description of what is wanted will get a better answer.
Despite the fact that perception in typical daytime light levels is dominated by cone-mediated vision, the total number of rods in the human retina (91 million) far exceeds the number of cones (roughly 4.5 million). As a result, the density of rods is much greater than cones throughout most of the retina. However, this relationship changes dramatically in the fovea, a highly specialized region of the central retina that measures about 1.2 millimeters in diameter (Figure 11.11). In the fovea, cone density increases almost 200-fold, reaching, at its center, the highest receptor packing density anywhere in the retina.
> Yes, definitely not looking to print resident images. ;-)

Because the image on the sensor is "charge" and not light, there would be nothing to see unless one could somehow see charge. So the image is present, and anyone able to see the charge would be able to observe it, presuming their vision had enough resolution, because the area is quite small, even on multi-gigapixel camera sensors.
And I am much relieved that the TS was not hoping to print resident images on a camera sensor array.
The camera does have an array of pixel elements, and each gets a charge proportional to the total amount of light energy falling within that pixel element. Thus an immediate quantization error is introduced in areas with fine detail, because the grid is not fine enough to resolve the varying light intensity across a single pixel. But that may or may not matter, depending on the resolution required.
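The spatial quantization described above can be sketched numerically. The following is a minimal illustration, with an assumed toy scene and an assumed 8x8 sub-sample block per pixel (not real sensor parameters): intensity variation inside one pixel's area is averaged into a single value, so all sub-pixel detail is lost.

```python
import numpy as np

# Hypothetical fine-grained scene: intensity varies within what will
# become a single sensor pixel (illustrative data, not a real image).
fine = np.linspace(0.0, 1.0, 1024).reshape(32, 32)  # 32x32 "sub-pixel" samples

# Each 8x8 block of sub-samples falls on one pixel element; the pixel
# accumulates charge proportional to the TOTAL light energy in its area,
# so the variation across the block is averaged away.
block = 8
pixels = fine.reshape(32 // block, block, 32 // block, block).mean(axis=(1, 3))

print(pixels.shape)  # → (4, 4): the coarse grid the sensor actually records
```

Total energy is conserved (the mean of `pixels` equals the mean of `fine`), but the within-pixel gradient is gone, which is exactly the quantization error the post describes.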
The image information is generally stored as a charge on some type of capacitor. In a CCD it is the gate capacitance of overlapping transistors; in a CMOS imager it is usually a charge-integration capacitor, which is sometimes just the parasitic capacitance of a certain node.

Thanks for the replies...
It's really just a request for information about how the device actually functions: how light carries the image to it, and how that image is then sent to memory.
I've done some googling on it, and can't get a straight enough answer.
A 2D image of a 3D object is being "broadcast" in real time through a lens as a wave, which then fragments into photons that "predictably" land on just the right pixels, in the right order, on the pixelated sensor grid (or plane, whatever the proper term is). Then, somehow, a software-driven, row-based scan sends a representation of the image that was "situated" on the grid to memory. Is that correct?
Someone said you'd not be able to "see" an image on the grid, conceptually speaking, if you were somehow able to "zoom in" on it with the proper equipment. Is that true? In my mind, there has to be a discernible image on the grid (even if you can't see it with the naked eye), like a "mini-screen," before it is transferred to memory as a 2D-array representation that can then be "seen" on an LED or LCD monitor.
Let me know if that's not clear...thanks for any info!
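The "row-based scan sent to memory" part of the question can be sketched very crudely. This is an assumed simplification (a real sensor reads rows through column amplifiers and ADCs, and CCDs shift charge rather than address rows), but the data flow is the same idea: the pixel grid is sampled one row at a time into a framebuffer in memory.

```python
import numpy as np

rng = np.random.default_rng(0)
sensor = rng.integers(0, 256, size=(4, 6))   # stand-in for the analog pixel grid

# Rolling-shutter-style readout: each row is sampled, digitized, and
# copied to memory in sequence (a crude model of the real pipeline).
framebuffer = np.empty_like(sensor)
for row in range(sensor.shape[0]):
    framebuffer[row, :] = sensor[row, :]     # "ADC + transfer" of one row

print(np.array_equal(framebuffer, sensor))   # → True: memory holds the image
```

So the "image in memory" is just this 2D array, built up row by row, which the display hardware can later paint back onto a screen.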
> Because the image on the sensor is "charge" and not light there would be nothing to see

The image is light, and it is focused on the sensor, so I believe you can see it on the sensor. The sensor changes the light image into a charge image. So the top of the sensor has a light image you can see, and in or below the sensor the image is in electron form. (And later on the image is stored on a hard drive in magnetic form.)
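The light-image-to-charge-image conversion above can be put in rough numbers. All values in this sketch are assumed, order-of-magnitude figures (photon count, quantum efficiency, and capacitance are illustrative, not taken from any specific sensor):

```python
# Order-of-magnitude sketch: photons -> photoelectrons -> voltage on the
# pixel's integration capacitor. All numbers are illustrative assumptions.
E_CHARGE = 1.602e-19      # electron charge, coulombs

photons = 10_000          # assumed photons hitting one pixel during exposure
qe = 0.6                  # assumed quantum efficiency (60%)
cap = 10e-15              # assumed integration capacitance: 10 fF

electrons = photons * qe              # photoelectrons collected
voltage = electrons * E_CHARGE / cap  # V = Q / C

print(f"{electrons:.0f} e-  ->  {voltage * 1e3:.1f} mV on the capacitor")
```

A few thousand electrons yielding tens of millivolts is why the "charge image" is invisible to the eye: there is no light involved at that stage, only a tiny stored charge per pixel.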
> Magical definitely being the operative word there!

Actually, it is a lot more complex than that.
I am sitting here looking at flowers, plants, trees, birds, etc. in a garden.
The sun shines on every object and sunlight is scattered off every part of the object.
I think of it as a zillion radio transmitters emitting RF signals all at different frequencies all at the same time.
The signals all blend together and pass through a tiny aperture, the iris of my eye. The lens is a Fourier transformer and it does magical things to all these signals. An inverted and laterally reversed image is formed on the retina at the back of my eyeball.
After that, it's way beyond me.
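The "lens is a Fourier transformer" remark in the post above can be illustrated numerically. Under the Fraunhofer (far-field) approximation, the diffraction pattern of an aperture is proportional to the squared magnitude of its 2D Fourier transform; the pupil radius below is an arbitrary assumption for the sketch.

```python
import numpy as np

# Fraunhofer sketch: a circular aperture's far-field diffraction pattern
# is proportional to the squared magnitude of its 2-D Fourier transform.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x**2 + y**2 < 20**2).astype(float)   # assumed pupil radius: 20 px

field = np.fft.fftshift(np.fft.fft2(aperture))
pattern = np.abs(field) ** 2                      # Airy-like intensity pattern

# The brightest point sits at the center (zero spatial frequency), just as
# a lens concentrates the undiffracted, on-axis light at the focal point.
print(np.unravel_index(pattern.argmax(), pattern.shape))  # → (128, 128)
```

This is only the wave-optics half of the story; the geometric image formation (the inverted, laterally reversed image on the retina or sensor) rides on top of it.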
> The answer is 42

The "magic" I'm trying to pinpoint is in how the lens is dependably placing