Okay, so you know how to take photos. Now we need to talk about how to make images!
Learning image composition is about learning how to be a detective. We want to be able to look at an image and understand why it does or does not work. Then we want to be able to decide what to do about it.
Learning composition starts as an analytical process on images that have already been taken. The better one gets at this, the more it starts happening not to images that have already been taken, but to images being visualized in one’s head, before pressing the shutter button.
Examine images you have taken. Review them. Which ones do you like? Which ones do you not like? Why? What’s the difference? What’s going on?
A note on Visualization
It’s sometimes said that photographers take thousands of images without ever pressing a single button. Visualization is the art of photography. The skill of visualization (or “pre-visualization”) often takes years for photographers to develop. I believe this is because photographers try to learn their camera settings and perfectly visualize what image will be created when they push the button. That’s a hard way to go. The difference between f/8 and f/11 is hard to imagine, and it’s very hard to learn whether such-and-such shutter speed will freeze an object moving at such-and-such speed.
Instead, photographers should focus on the relative. They visualize the image they desire, with its various characteristics, and then find the settings (and location/angle/etc.) that get towards those characteristics.
By learning composition alongside visualization, the workflow presented here is for photographers to visualize individual attributes, like brightness or the blurriness of objects, and consider how each might affect the image, eventually deciding which attribute(s) to focus on.
Then, one works from these goals to get to the right settings to get the photo. “I want this motion to be frozen, so I need a fast-enough shutter speed”. Learning what shutter speed counts as “fast-enough” is just a matter of practice and, perhaps, trial and error.
What matters is thinking about why an image may or may not work compositionally while taking the photo. One’s ability to predict what a camera will do comes second to that. Because being able to predict what an image will look like before you take it is fine and dandy and helpful, but doesn’t matter as much as knowing why to change camera settings, or where to go to get the best photo when presented with a scene. Learning composition is how to know this.
The settings don’t matter. Only the image matters.
Right then. Let’s talk about image composition, and start understanding how images work.
Subject isolation is a set of ways to make the photo be about what it is about, and not about anything else. This, I believe, is the most fundamental composition technique. It is a goal to achieve in an image, a way to think about what is and isn’t working in images.
Most beginner photographers will find their best images exemplify subject isolation.
Make your image such that it is an image of one thing, and strongly of that thing.
Ways to achieve subject isolation:
Make it bigger
Cut the clutter
Deemphasize the background (make it blurry or clean)
Use the rule of thirds
Try a blurry background (via depth of field or motion blur or just go somewhere blurry)
Stop talking about focal length, or which focal length you prefer. Those are just crops.
Those different crops affect the way photographers approach subjects, and the subject-background ratio. They give different looks, but it’s all subject distance.
Ansel Adams in ‘The Camera’:
“…the effect of changing to a lens of longer focal length is to increase the size of the image of any part of the subject. Beginning photographers therefore often assume that the effect of changing to a longer focal length lens is equivalent to moving closer to the subject. In fact there are important differences. Since photographers often do both at the same time- change lenses and subject distance simultaneously- the effects of these two acts are often confused.”
So what does subject distance actually do?
Understanding Subject Distance
When I say “subject distance” I mean the distance between the camera and what it is taking photos of.
What I actually mean is the relative distances of everything in the scene.
Basically, because of perspective, things that are further away appear smaller. When we have photos with multiple objects at different distances (i.e., the subject and the background), we can affect the perceived relative sizes of these objects just by moving closer or further away.
For the curious, there are special lenses, telecentric or “orthographic” lenses, that do not exhibit perspective distortion.
In the dolly zoom, also called the Hitchcock zoom, the cinematographer moves the camera forwards or backwards while zooming in or out at such a rate that the subject stays the same size in the frame.
It’s All Relative
If I am 5 units from the camera and the background is 10 units from the camera, 5 units behind me, the background is twice as far from the camera as I am.
So the background is pretty big, all things considered.
If I walk towards the camera so I am only 1 unit from the camera, but the background is still 10 units from the camera, now I am 10 times closer to the camera than the background is. Before, I was only twice as close.
I am going to appear 10 times bigger relative to the background, while before I was only twice the size. I am going to take up a lot more space in the frame!
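The arithmetic above can be sketched in a few lines (units are arbitrary; apparent size falls off roughly as one over the distance from the camera):

```python
def relative_size(subject_dist, background_dist):
    """How many times larger the subject appears relative to the background.

    Apparent size is roughly proportional to 1/distance, so the ratio of
    the two distances is all that matters.
    """
    return background_dist / subject_dist

# Subject 5 units away, background 10 units away:
print(relative_size(5, 10))   # 2.0 -> subject appears twice as big
# Walk up so the subject is only 1 unit away:
print(relative_size(1, 10))   # 10.0 -> subject appears ten times as big
```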
What Now? Math is confusing!
Relax. Just think about the relative distances of everything from the camera. Not actual feet or real-life units, just how many times closer to the camera different objects are.
Let’s think back to the Hitchcock zoom video above (“History of the Dolly Zoom”). Consider one of the shots, from Jaws.
The top image, the camera is far away. The bottom image, the camera is close.
Relatively, the background is much closer to the camera, compared to the actor, in the top photo than in the bottom photo. In the bottom photo, the actor is much closer to the camera than the background.
In the bottom image, we can see the orange/white striped building that we couldn’t see at the beginning of the shot. When the camera got closer to the actor, it also zoomed out (so the actor stayed the same size), revealing more of the background.
More Than Just A Film Gimmick
The Hitchcock zoom, in film, is just demonstrating the optical effect. In photography, how far away to stand is almost always the first decision a photographer has to make.
How do you learn this? I can show you a million photos, or you can go take a few photos of the same object while moving forwards and backwards from it. Experiment!
Zoom lenses just crop
When you zoom in or out with a zoom lens (or by switching lenses), nothing about the image changes except for what section is visible. It’s just a crop!

Okay, the depth of field changes – a wider lens means a deeper depth of field. But nothing changes about the relative sizes of the objects in a scene. They don’t really “get bigger”; that only happens when you move closer. All you are doing, when you zoom, is cropping the photo! That’s it! Nothing else!
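One way to see that zooming only crops: the angle of view narrows as focal length grows, but the geometry of the scene stays fixed. A small sketch, assuming a full-frame sensor (36mm wide):

```python
import math

def horizontal_fov(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view, in degrees, for a given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov(24), 1))   # 73.7 -> wide view of the scene
print(round(horizontal_fov(200), 1))  # 10.3 -> a tight crop of the same scene
```

Nothing in this formula involves subject distance; only the slice of the scene you keep changes.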
Move your feet to actually, you know, change the image being taken.
“If your pictures aren’t good enough, you’re not close enough.” – Robert Capa.
Stop zooming in to get closer. Start moving your feet. Making the subject larger than other objects in the frame doesn’t just clearly signify it as more important. It also gives a more intimate, more involved feeling. Exaggerating object sizes to be larger than real life (compared to the background) is much more interesting than a photo that puts the audience further away than they normally would be. Bring the audience closer than they are used to!
This is just a guideline. When starting out with photography, I highly encourage you follow it. Start moving your feet while you take photos, and see how they change.
Saturation is like the opposite of converting an image into greyscale.
Every pixel is stored as values of red, green, and blue. These are simple numerical values between 0 and 255.
Red is (255,0,0) and green is (0,255,0).
When these values are all the same value, the pixel is greyscale. (0,0,0) is black and (255,255,255) is white.
Converting an image to greyscale involves averaging the values together. With a simple average, red (255,0,0) becomes (85,85,85).
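That averaging, as a sketch (real converters usually weight the channels by perceived luminance rather than averaging them equally):

```python
def to_greyscale(r, g, b):
    """Naive greyscale conversion: every channel becomes the simple average."""
    avg = round((r + g + b) / 3)
    return (avg, avg, avg)

print(to_greyscale(255, 0, 0))  # (85, 85, 85)
```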
If greyscale is the three values (RGB) merging towards the same value, then saturation is how far apart these values are from one another. Increasing the saturation increases the difference between these values without changing the overall tonal brightness of the pixel.
That’s the easy way to think about it. It’s also sort of wrong.
Except… changing the saturation of an image isn’t quite as simple as pushing the R, G, and B values further from each other, as this would eventually leave us with only pure red, green, and blue pixels, and would distort how the colors are perceived along the way. A purple might become blue as it saturates. Computers need to keep the ratios between the values proportionate.
One way to do this in real life is as follows:
First, the color is converted into a different color space. A color space is a 3-dimensional map of colors. RGB space is a 3D space with the red, green, and blue values each on an axis (think XYZ).

We convert to a different color space such as HSL, which has Hue, Saturation, and Lightness as its axes, as opposed to quantities of red, green, and blue.

With the colors organized by hue, saturation, and lightness, one can easily push the colors along the saturation axis, then convert back to RGB space. Many color-space conversions can be done with matrix transformations, if you remember your linear algebra from college. If not, don’t worry about any of this.
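Python’s standard library can do the round trip for us (colorsys calls the space “HLS” rather than “HSL”, but it’s the same idea). A minimal sketch:

```python
import colorsys

def adjust_saturation(r, g, b, factor):
    """Scale saturation by `factor`, keeping hue and lightness fixed.

    Channels are 0-255; colorsys works in 0.0-1.0 HLS space.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    s = min(1.0, s * factor)  # clamp at fully saturated
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))

print(adjust_saturation(200, 50, 50, 1.0))  # (200, 50, 50) -> unchanged
print(adjust_saturation(200, 50, 50, 0.0))  # (125, 125, 125) -> grey
```

Note that at factor 0 all three channels collapse to the same value – greyscale, exactly as described above.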
In order to understand images, we need to be able to talk about them. We need to be able to identify various attributes of images, and identify how they are different from other images.
I call these tools “image properties”, but they sometimes might better be considered as “properties of things in images”. Either way, these are things we can identify about images.
Here’s a list of some common properties that an image (or a thing in an image) may have.
Focus (out of focus vs. sharp)
Facing Direction/Looking Direction
Distance from camera
Many of these are self-evident, but let’s break it down.
How bright is the object? Is it totally white or black? Over- or under-exposed? Are its edges bright, is it entirely bright, and/or does it have highlights?
How contrasty (yes, that’s a real word) is it? In some ways, this is the opposite of blurriness, as contrastiness and perceived sharpness often go hand in hand.
Does this object take up a little bit of space on the histogram, or do we see a full range of tones from black to white?
What color is it? What is the nature of that color? What does that color say about the object? What does the color palette of the image say? Are there complementary colors, or other such color rules, at play?
Saturation refers to the perceived color intensity of any color or object. How ‘colorful’ is it, basically? Read more on saturation.
How big is the object compared to other objects? How much space does it take up in the frame?
Where (in the 2D image plane) is the object? Is it on one of the third lines?
Where (in space) is the object? Is it near other objects? Far from them?
What is the outline of the object? Is it a blob, or defined? Does it overlap with other objects in the scene?
Is the object in focus? Is all of the object in focus?
Does the object have motion blur, or is it on a background that has motion blur?
Does the object look like it should be moving, like a car with blurry wheels, or a person running?
If it does have motion blur or implied motion, does the object have space to move in the image, or is it against the edge of the frame?
Does the object have spacing around it? Is it “comfortable” where it is, or does the position feel unnatural, like a person’s face up against the far side of a photo?
Facing Direction/Looking Direction
Is the object “open”, facing towards the camera, or “closed”, facing away? Is it facing towards the edge of the frame, or towards the middle/across the frame?
If there are eyes, are they looking at the camera, behind the camera, or to the side? Is the head facing the same direction as the eyes? Is the chest (the collarbone) facing the same direction as the eyes and/or head?
Distance From Camera
How far is the object from the camera? Was the photographer very far away, or very close? Is the object the closest thing?
Can the photographer get closer? Could they go further away?
Does the size of the object relative to other objects in the scene match with our expectations of reality, or have things become amplified? Are flat objects still flat?
If this object is one of a series of similar objects, like columns on a building, how much different in size is this object than the others?
Does the object have a natural, unnatural, or implied frame around it?
A natural frame, like a window, looking through railing poles, or curved tree branches, where an object is surrounded in a way that might be expected “naturally”.
An unnatural frame where something has been constructed, placed, or held up to the camera.
An implied frame where there is no literal frame, just a use of shadows and other compositional elements to ‘surround’ the subject.
Is all of the subject visible? Is any part that is not visible implied to exist, like the top of a head? For the parts not visible, is it uncertain what is out of frame, like just how far down an iceberg goes, or how large a crowd is in a close-up image of a few people marching?
If somebody is reacting to something, is that something visible?
If a baseball pitcher throws a ball, do we see the ball, or just the pitcher’s posture?
It’s All Relative
I skipped many of the hypothetical questions because they are all the same: Is this the most [blank] thing in the image? Is it the brightest? The darkest? The only thing in focus? Is it the closest thing to the camera? The furthest? Does it break a pattern? One can ask these questions about almost everything on this list when considering why an image does or does not work, and what to do about it.
The rule of thirds is perhaps the most fundamental rule of image making there is.
First, let’s look at some examples of photos that use the rule of thirds. Check out the following site (occasionally updated by yours truly) for a whole pile of example photos that follow this compositional guideline.
Fold a piece of paper into thirds. Do it again the other way. You now have a tic-tac-toe board of sorts on the image. The lines and intersections made by the creases: those are the third lines.
When humans look at rectangles, we tend to look at these points and lines first. We also tend to observe things that fall on these lines more than things that don’t.
This image (shot on an Olympus OM-1) puts the ladder on the left third line.
How To Use The Rule Of Thirds
Thus, when we are trying to figure out where in the frame to put something, the simple rule is this: Put it on the third. On an intersection or line.
Landscape? Don’t put the horizon line in the center; this is uncomfortable. Move it to the top or bottom third line!
This image puts the horizon line, blurry as it is, on the bottom third. The subject is, of course, on the left; and their face on the top-left intersection.
Taking a close up photo of a face? Too big to put it on a third? Put the eyes on the top third. Done, now worry about something else, like the expression or the lighting.
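If you want to overlay the grid yourself, the third lines are just simple fractions of the frame. A quick sketch (the image dimensions here are hypothetical):

```python
def third_lines(width, height):
    """Coordinates of the four third lines and their intersections."""
    xs = [width / 3, 2 * width / 3]   # vertical third lines
    ys = [height / 3, 2 * height / 3] # horizontal third lines
    points = [(x, y) for x in xs for y in ys]
    return xs, ys, points

xs, ys, pts = third_lines(6000, 4000)
print(xs)      # [2000.0, 4000.0]
print(pts[0])  # top-left intersection, (2000.0, 1333.33...)
```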
The Rule Of Thirds is Everywhere
This rule is considered by many to be the most fundamental guideline (a concrete, actionable guideline) when taking a photo. When you crop a photo in most software, thirds lines appear to help you decide. Most cameras allow you to overlay third lines on your viewfinder/preview to see while taking pictures.
In this photo, the subjects are on the bottom-right third line, the road on the bottom line, and the building horizon is about the top third line. I didn’t think about any of this when I took this photo. I just saw the evening light providing a rim light around my friends, a nice scene of a city; I dropped back a few steps, pulled my Olympus XA out of my pocket, and snapped the photo.
Break This Rule
Of course, with such a fundamental rule, it will get broken. All the time. I encourage you to break it! Break it on purpose, as an exercise in understanding other compositional elements, as they must be present to “take over” from this rule, which can largely be a starting point.
The easiest example is scenes with symmetry, or stronger compositional elements that matter more – like leading lines, or the photographer’s physical restrictions (I couldn’t go there, or get close enough, etc).
In this photo, the leading lines mattered more.
The second reason to break it: by breaking it, you can draw a lot of attention to the object that “doesn’t fit”. Big pictures of walls with a subject in the corner, for example.
Okay, kids, let’s talk about math for a bit. Remember back in math class, when you had the Cartesian coordinate system? You know, the one with two lines (an X and a Y axis) where we could identify points by their position along each. Right? Math? Remember?
Don’t worry, I’m going somewhere with this.
Okay, Cartesian coordinate system. Easy stuff. Now think back to geometry; there was another coordinate system we cared about: the polar coordinate system.
The polar coordinate system didn’t identify points on a plane with an X and a Y coordinate. Instead, it used an angle and a distance (from the origin). Two numbers, just as before. Placing points on a plane, just as before. They just represent different things.
An angle and a distance turned out to be very useful when doing math on and around circular things, like… well, like math people sometimes do. They also help us as a way to think about light!
We care about the distance a light is from the subject, and we care about the angle that light has to the subject. It’s much more useful for us to think about the light’s location in these terms than it is to think about the light’s position relative to the camera (“camera right”, or whatever), or in some Cartesian coordinate system (which we would use if we had to install lights into a drop-ceiling grid).
Light has a distance from the subject, and an angle to the subject. For discussion, we will usually assume the subject is facing the camera.
So distance and angle. Right.
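The analogy can be made concrete: converting a light’s polar position (angle and distance) into floor-plan coordinates is one line of trigonometry. A sketch, where 0 degrees means straight in front of the subject:

```python
import math

def light_position(angle_deg, distance):
    """Convert a light's polar position (angle, distance from the subject)
    to Cartesian (x, y) coordinates on a floor plan."""
    a = math.radians(angle_deg)
    return (distance * math.cos(a), distance * math.sin(a))

print(light_position(0, 2))   # (2.0, 0.0) -> 2 units straight ahead
print(light_position(90, 2))  # directly to the side, 2 units away
```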
If there is one attribute that matters most, it’s probably the angle the light comes from – the angle of the shadows. We have high and low, front- and side-lighting. We also have back-lighting, and, well, everything in between.
The combination of light from different directions adds its own host of complexities to the situation.

There is nothing magical about angle. No property of light placement is inconceivable. One just has to start paying attention to the light around them, and work iteratively while shooting.
That said, here are some fundamental points about light angles to consider.
Light position relative to the camera
The closer the light is to the camera, the “smaller”, or more “flat” the lighting may appear. On-camera flashes are largely considered to be ugly, but they sure do get the job done – they don’t cast shadows on anything that’s visible in the scene. The shadows are going out at about the same angle as the perspective of the camera, so shadows don’t interfere with anything.
Light that is perpendicular to the camera is likely to cast shadows across the scene, and highlight the edges.
Lighting to the side (but not all the way to the side) can highlight the depth that objects have (as opposed to “flat” on-camera lighting), as the shadows are more revealing of depth information in the objects. The further to the side a light is, the more pronounced these shadows and that sense of depth become.
Most photographers put lights somewhere between on-camera and to-the-side, finding a nice balance between the subject having depth, and the subject being too “dramatically” lit. Being lit dramatically means having elements totally bright and totally dark very near each other.
Light going into the camera runs a lot of interesting risks, such as glare, chromatic aberration, and other technical issues that generally make images “poopy”. Light behind a subject to some degree often “wraps around” the edge, serving to visually separate the subject from the background and define its contours.
Light position relative to the subject
As mentioned before, light that is “on-axis” to the camera can be very flat, where one can’t easily identify depth or texture information in the subject. Even if the camera is not on-axis, depending on the texture, one can light objects to still appear flat, so long as the lighting is on-axis to the depth information of the subject. A photo of a brick building, for example, will have one wall (in shadow) appear flatter than the one with the light striking across it, regardless of camera position.
Light striking “across” a surface will highlight its texture, as the bumps, ridges, and so on will have highlights and shadows, and the existence of these bumps is exaggerated.
Light behind a subject may glow through translucent subjects, like paper, or make objects like hair seem to glow.
Light behind and to-the-side has the appearance of “wrapping around” a subject, as mentioned above.
Distance is a less intuitive beast than angle. But don’t worry, it isn’t hard.
The brightness of a light falls off with the square of its distance – the famous inverse-square law. And, sadly, that is not as fun as it sounds. It involves math. Wait, wait, don’t leave! It’s easy math! Easier than that coordinate system stuff I mentioned above.
The short version is this: The closer a subject is to a light source, the brighter the subject is lit.
The closer you are to a light source, the brighter it is. GOT IT. EASY.
There is another layer to this. It’s an inverse-square relationship, and it works like this, for a subject and a background: If a subject is x times closer to a light source than a background, then the subject is x squared times brighter.
I’m going to diagram it with emoji.
Let’s say I have a subject (cat) 1 unit from a light source (sun), and a background (building) 1 unit behind that subject, 2 units from the light source.
In this case, the building is 2 units from the light source, and the cat is 1. The cat is 2 times closer, and would be 4 times as bright as the background.
Let’s pretend, for example, that the sun is really far away. Just hypothetically speaking. Let’s say that the cat is 1,000,000 units from the light source, and the building is 1,000,001 units. 1000001/1000000 is 1.000001. So the cat is 1.000001 times closer, and 1.000002000001 times as bright. Or about 1 times as bright. Which is about the same brightness. (Not one times brighter, but the cat’s brightness times one. Which is… the same as… the cat’s brightness.)
In other words, if the light source is really far away, then two subjects will be lit with basically the same brightness. The closer one subject is to the light source than the other, the brighter that subject will be.
In other, other words, the ratio of the distances to the light source is all we have to care about. We care about how much closer something is, and thus how much (squared) brighter it is.
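That ratio-then-square rule in code form (units arbitrary, as before):

```python
def relative_brightness(subject_dist, background_dist):
    """Inverse-square law: if the subject is x times closer to the light
    than the background, it is lit x**2 times brighter."""
    ratio = background_dist / subject_dist
    return ratio ** 2

# Cat 1 unit from the light, building 2 units away:
print(relative_brightness(1, 2))  # 4.0 -> the cat is four times brighter
# Sun-like distances: effectively identical brightness
print(relative_brightness(1_000_000, 1_000_001))  # ~1.000002
```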
In Real Life
If we have a photo with subjects at different depths (say, a group photo) and we want it to be evenly lit, then we need to back our light source up.
Let’s say we don’t want the background to be seen at all, so we can photograph our subject against solid black. We need to move our wall further away. That’s kind of hard, so instead let’s just move our light source closer to the subject. I do this all the time.
Also, light distance affects not just two different subjects, but even a single subject. (Like a face! I mean, a person!) A closer light source is going to have a more dramatic feel than a further one, because the light is not lighting the subject evenly. The side closer to the light source will be noticeably brighter, and the side further away will be darker. It will feel like it “falls into shadow faster”, although that’s not really how it works.
Lastly, the distance affects the perceived size of the light source, which matters! We’ll talk about perceived light source size next.
Don’t Worry, We’ll Come Back To This
Still confused? Relax. Coming up will be practical light setups, and exercises to practice positioning light sources.
The bigger the light, the softer it is. The smaller the light, the harder it is.
“Soft” or “hard” light refers to the transition area between what is being lit and what is in shadow. Hard light has sharp lines for its shadows, like the midday sun, or a puppet show. Soft light is “blurry”, and transitions into shadow more slowly.
Soft light also brings out less texture detail, and tends to light things more evenly. It is considered great for portraiture, as it can be very flattering.
Hard light is often considered dramatic, due to its ability to bring out texture and detail, and to cast shadows like in noir films.
Perceived size, not just size.
You could have a light source as big as the sun, but if it’s really far away, then it’s still small relative to the subject, so it’s going to be a hard light source. Like… the sun. The sun is a pretty good example of this.
This is why photographers tell their accountant that they need such massive studios. It’s not just because they like high ceilings, but it’s so they can back their lights far enough away in order to get the look they want to get.
In image A, below, we are lighting the subject with a single bare flash (and the window, technically – ignore that, it’s just so we can see the room for context). It’s a very small light source, and would produce hard shadows. If we want to soften it up, we can put up a shoot-through umbrella and fire the flash into that. The umbrella diffuses the light, and now we can see the light source in B is much larger.
If this is now too large for us, the shadows a bit too soft, what can we do? We could move it further away, like in C. What if we want much softer light? We can move the umbrella closer, as in D, and get nice soft shadows.
Moving a light source affects its perceived size (hard/soft shadows) and light falloff (dramatic/flat lighting). How do we manipulate these together?
Falloff: Light is more dramatic when it is closer.
Perceived size: Light sources get bigger when they get closer. But when they get closer, the light falloff is more dramatic. This leads to a certain ‘look’ when you move a softbox or an umbrella really close to a subject: soft, smooth, but dramatic light. I happen to really like this light for portraits, as seen below.
Notice how the light falls to shadow smoothly – it’s a gradient, but it falls to very dark shadows very quickly. As you can see in the reflection in the subject’s eyes, the light source is very close: camera right and up a little. It’s so close, just out of frame, that there is a noticeable difference in brightness between his forehead and his cheek, just inches further from the light.
What if we move the light source far away, but keep the light source large?
We get something like this: soft shadows, flat light. You can see in the eyes that the key light is in the same relative place as in the photo above – the same direction of light, but much further away. It’s also large enough that the distance doesn’t change its relative size much. The light falls to shadow much more slowly.
In the first photo, the background was pure black. Nothing. Here, the wall is lit by the light source, and is an even grey. If I wanted it darker I could move the wall further away. But, because moving walls is pretty difficult, I would instead make the subject relatively closer to the light source – either by literally moving the subject closer to the light, or by moving the light closer to the subject. The latter moves the light literally closer to the wall too, but remember, the distances are relative.
One more photo to analyze:
Focus on the key light, the white light, not the red fill light. The light is hard, so it goes into shadow very quickly. You can see the hard shadow line her nose makes on her face. But the light source is far away. She is the same brightness from her head to her shoulder, everything in the light’s spread. The flash is far enough away that there isn’t a dramatic change in brightness. Contrast that with the first photo above and the bright spot on the subject’s forehead.
When I am lighting a portrait, I usually start with my light source’s direction and distance, and then pick what modifier I want to adjust its size last. The reason I often work this way is that I usually don’t shoot in a studio, and I am limited by my environment. I start by looking at how much room I have, where I could put lights, and what options are available to me: how far away can I put lights? What color are the walls, and can I bounce light off of them? And so on. From there I can usually get to a starting point, and after that it’s just experimenting and iterating.
While the properties of light are intuitive – we have been looking at lit things since… ever – they are also complicated. They build on each other and quickly become highly complex. One’s first task is to learn how to identify what light sources created what image, just by looking at the image. Once you can think in this direction, from image to light sources, it’s easier to learn to go the other way, and pre-visualize what something will look like under certain lighting conditions. Once one can do that, then it’s just practice that brings one to understanding what changes to make to work towards a certain look.
This is true for headshots, product photography, and even naturally lit photographs. Even when you aren’t controlling light sources, you have the ability to move the subject and the camera. Or even just the camera. Start looking for light in the world and take advantage of it.
Do not underestimate the importance of lighting terms. Learn the words, and reading/learning about lighting everywhere will be easier.
Stops are Stops
Same as before, a stop is half or twice as much light – a single unit of perceived brightness, brighter or darker, for the image.
“Up a stop” makes things brighter, “down a stop” darker.
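The doubling/halving arithmetic, as a sketch:

```python
def apply_stops(light, stops):
    """Each stop up doubles the amount of light; each stop down halves it."""
    return light * 2 ** stops

print(apply_stops(100, 1))   # 200 -> "up a stop"
print(apply_stops(100, -2))  # 25.0 -> "down two stops"
```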
Temperature is Color
Temperature is color, as explained in the color section. Photographers say things are “cool” or “warm”, “cooler” or “warmer”. Light is (almost) never referred to as cold or hot – the proper usage is relative.
Key, Fill, Back
Key, fill, and back don’t refer to anything special about the lights themselves. You don’t buy a key light that is different from a fill light, although to hear photographers talk, it can sound that way.
These refer to principal lights that serve principal objectives.
Your key light is your main light, often coming from 3/4 off camera (45 degrees from the lens axis, and from above).
The key light exists to give shape, detail, and definition to the subject. It’s usually the brightest light, and provides the main ‘look’, ‘style’ or ‘aesthetic’ for the image. The decisions you make on how to set this light should come first, because they have the biggest impact on your image.
The fill light fills in the shadow side. It can be a dimmer light on the opposite side, but is often a flatter (fewer shadows) light that lights up the entire subject. It is usually positioned behind the camera, on the side opposite the key.
Fill lights get us our base exposure. If we are lighting a face, then without the fill light the shadows across the face (as cast by the nose, etc.) would be totally dark – which we don’t usually want.
A 'lighting ratio' is the ratio between the total light (key + fill) and the fill light. Don't worry about what lighting ratios actually are, mathematically speaking, but do know that photographers describe their lighting looks by the difference between the key light and the fill light.
In loose terms, this can make an image more or less ‘dramatic’.
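As a rough sketch of how the numbers work (the function name is mine, not a standard): lighting ratio is conventionally the total light on the lit side (key + fill) against the fill alone, and since the key is some number of stops brighter than the fill, the ratio falls out of powers of two:

```python
# Lighting ratio, measured as (key + fill) : fill.
# If the fill is `stops_under_key` stops dimmer than the key,
# the key is 2**stops_under_key times as bright as the fill.
def lighting_ratio(stops_under_key):
    fill = 1.0                      # fill light, as the unit
    key = 2 ** stops_under_key      # key light, in units of fill
    return (key + fill) / fill      # the "N" in an N:1 ratio

print(lighting_ratio(1))  # fill 1 stop under the key -> 3:1
print(lighting_ratio(2))  # fill 2 stops under -> 5:1, more dramatic
```

Bigger ratios mean deeper shadows relative to the lit side, which is the 'dramatic' look; smaller ratios are flatter and softer.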
Note: The reflection in the eyeglasses is from the key light. I needed to change something about the angles between the glasses, light, and camera, which was done by having the subject tilt their glasses down slightly (as seen in the no-fill-light image). Photographer's trick: I had the subject push up on the stems of the glasses, so they weren't resting on the ears.
The back light is also called the kick light, the kicker, a rim light, or a hair light.
This is a light positioned somewhere behind the subject, used to separate them from the background by creating a rim.
This is the photo without the back light.
And this is just the back light.
Together the image is much nicer. The subject has depth and detail.
Also note the difference in color temperature. I did not match my light temperatures on purpose, because I like the look and the way the gold light works with the blonde hair.
My final image:
A background light is different from a back light.
In this case, pink. Also, this pink light is spilling off the walls and giving our subject some back light too. That was on purpose, to reflect through the already-dyed hair.
It's a light that, well, lights up a background: either to provide color, to evenly light a backdrop (a good green screen needs lighting separate from the subject; shadows mess up green screens!), or to create a bright area behind the subject, framing them with a pseudo-vignette.
The 3 Light Setup
The classic 3 light setup is simple: a fill light gets a base exposure, a key light gives depth and detail (and is the "main" light), and a back light separates the subject from the background.
This is the bread and butter of practical and usable lighting.
Gobo stands for "Go-Between Object": anything that goes between the light and the subject in order to block some light. I often need to keep my lights from spilling onto the background or somewhere else I don't want them, and a gobo is the thing that does it.
What actually are gobos? Literally anything opaque that you can position in the right place. I often use people, if they are nearby, asking them to hold up their hands or stand in certain spots.
A cookie is a gobo with holes in it. Or something slightly transparent. It lets some light through and can be used for interesting effects.
Often, the leaves of trees act as cookies, and the occasional ray of light gets through.
To get the following photo, I violently and randomly stabbed some cardboard with a knife and taped it like a flag to a light stand. I asked my subject if she could see the head (bright part) of my light with her left eye closed. She could, so I knew a ray of light would find her face. There was some trial and error to get the positioning right.
A snoot is a tunnel that makes light into a spotlight. I stick coffee cup holders, the ones that keep you from burning yourself, onto my lights sometimes.
A grid is basically a bunch of small snoots. You can make a grid out of a lot of straws taped together, but this is one of the light modifiers that I actually buy, since it’s basically a single piece of indestructible plastic.
Grids make a spotlight, narrowing the beam like a snoot, but the light from a grid has soft edges.
Walk signs use grids so you can only see the sign if you are looking right at it (from across the street), and thus you only see the appropriate sign.
I use grids all the time. I love using them very subtly, making a photo look naturally lit but, well, cheating with a little extra pop on my subject to make them brighter than anything else.
A soft box is a big panel that goes around a light. It does two things: it diffuses the light (makes the perceived source size larger) and blocks light from spilling out of the back (unlike umbrellas).
Some soft boxes have a second internal diffusion sheet inside them. This helps the front of the soft box have completely equal illumination, but eats up more light (so it’s not as bright).
I really like soft boxes.
Umbrellas are either reflective on the inside (you bounce light off them) or white and diffuse (you shoot light through them).
The reflective ones are more efficient (they eat up less light), but can take up more space in a studio, since the light stand is 'in front' of the light source. They can also, in reflections (like catchlights in the eye), show the arm