Images are made with light captured from the world. We can consider (or “model”) light in many ways – as particles, waves, and so on. For photographers it’s most convenient to consider light as rays that travel in straight lines and bounce off of things. Simple enough.
Some rays are weak, and some are intense – think of dim light sources versus bright ones. Light also gets dimmer as it travels farther from its source.
Time matters too: a ray of light hitting an object for an instant and one hitting it for a long time have different effects.
We need to absorb rays of light in order to capture an image. Let’s set aside the difficulty of focusing this light for later; for now, just know that if we don’t gather enough light, we won’t be able to capture our image.
Remember those old glow-in-the-dark toys? The greenish ones?
For one of these to glow brightly, it had to absorb enough light first. If you didn’t leave the lights in the room on for long enough, it wouldn’t glow; or it would glow only faintly.
Picture an array of these stars, very densely packed together. If we shine a flashlight on them, but hold up shadow puppets in front of our light, then turn the light off, we will be left with an image – of sorts – of the shadow puppet. That’s…sort of… photography. Kinda. Making images by capturing light. Sure.
Film is made up of a sheet full of tiny little grains. A dense grid of them. These grains – similar to the phosphorescent stars – are sensitive to light. After we shine light at them, we can “develop” the film through an entirely different chemical process than the one behind the phosphorescent stars (this one involving silver halide), which makes the image that was projected onto it permanent, no longer sensitive to light.
Then a bunch of other things happen that we don’t need to talk about now, and we have an image.
In digital photography, we have image sensors made up of a bunch of little light sensors, each representing a pixel of an image.
In order for these little light sensors to show an image, we need to shoot enough rays of light at them, for a long enough period of time.
If we give them no light, the pixel will be black. Like when you forget to take the lens cap off.
If we give them a lot of light, the pixel will turn white.
Somewhere in between, if we give them not too little and not too much light, we get a pixel that is grey. Or light grey. Or dark grey.
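That black-to-grey-to-white behavior can be sketched in a few lines of Python. This is a toy model, not how any real sensor works: the `full_white` threshold is a made-up number standing in for “so much light the pixel clips to white.”

```python
def pixel_value(light, full_white=100.0):
    """Map an amount of captured light to an 8-bit grey value.

    `full_white` is an illustrative threshold: any light at or above
    it clips to pure white (255), and zero light is pure black (0).
    """
    fraction = max(0.0, min(light / full_white, 1.0))  # clamp to [0, 1]
    return round(fraction * 255)

print(pixel_value(0))    # lens cap on: black (0)
print(pixel_value(50))   # somewhere in between: a mid grey
print(pixel_value(500))  # way too much light: clipped to white (255)
```

Everything past the threshold collapses to the same white, which is exactly why blown-out highlights lose all their detail.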
Get enough of these little light sensors reading various shades of grey, and you have an image! Images come from contrasting elements. Dark next to light, and so on. Just like drawing. Blue ink can’t draw well on blue paper. White pixels don’t show details next to other white pixels.
If they were all the same brightness of grey… we wouldn’t have an image. We wouldn’t if they were all black or all white, either.
Don’t worry. I’ve left my lens cap on enough times to thoroughly test the hypothesis that an all-black image makes a good picture. It doesn’t. No need to test that yourself.
The key to getting an image to appear is that the bright parts of our image are not too bright (i.e. not solid white), and the dark parts of the image are not too dark (i.e. not solid black).
Cameras have a limited range of brightnesses that they can capture. That range, from the darkest point in the scene that is just-barely-not-black to the brightest point that is not-quite-white, is called the dynamic range. Nicer (usually newer, more expensive) cameras have larger dynamic ranges.
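One common way to talk about dynamic range is in “stops,” where each stop is a doubling of brightness. Here’s a small sketch of that idea; the stop counts are illustrative, not taken from any real camera’s spec sheet:

```python
import math

def fits_dynamic_range(darkest, brightest, camera_stops):
    """Check whether a scene fits within a camera's dynamic range.

    `darkest` and `brightest` are amounts of light (any positive unit,
    same for both); `camera_stops` is the dynamic range in stops,
    i.e. how many doublings of brightness the sensor can distinguish.
    """
    scene_stops = math.log2(brightest / darkest)
    return scene_stops <= camera_stops

# A scene whose brightest point has 1000x the light of its darkest
# point spans about 10 stops:
print(fits_dynamic_range(1, 1000, 12))  # roomy sensor: True
print(fits_dynamic_range(1, 1000, 8))   # narrow sensor clips: False
```

When the scene spans more stops than the camera can hold, something has to give: either the shadows go solid black or the highlights go solid white.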
If we can capture our grey, detailed image, then that image can actually represent the world we pointed the camera at. We can do photography!
A major part of photography is adjusting three settings on the camera – the shutter speed, the aperture, and the ISO, which determine how much light gets into the camera.
The major technical goal of photography is to create an appropriate exposure. One that shows the scene with as much detail as possible: Not all black, not all white.
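A common approximation ties those three settings together: exposure scales with shutter time and ISO, and inversely with the square of the f-number. Here’s a rough sketch of that relationship (the specific setting values below are just examples):

```python
def relative_exposure(shutter_s, f_number, iso):
    """Relative exposure from the three settings.

    Longer shutter times and higher ISO brighten the image; a wider
    aperture means a *smaller* f-number, and light gathered scales
    with 1 / f_number**2.
    """
    return shutter_s * iso / (f_number ** 2)

# Doubling the ISO doubles the exposure:
base = relative_exposure(1/125, 8, 100)
print(relative_exposure(1/125, 8, 200) / base)  # prints 2.0

# Slowing the shutter from 1/125s to 1/60s roughly doubles it too:
print(relative_exposure(1/60, 8, 100) / base)   # a bit over 2x
```

This is why photographers talk about trading settings off against each other: many different combinations land on the same overall exposure.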