Episode 22

22. Shades of Gray

The photographic industry once overlooked the importance of accurate skin tone reproduction for people of color. Now, new technology touted in Google’s latest smartphones promises to render darker skin tones in a more accurate way.

Show Notes


Transcript

The photographic industry once overlooked the importance of accurate skin tone reproduction for people of color. Now, new technology touted in Google’s latest smartphones promises to render darker skin tones in a more accurate way, but what is it actually doing?

Hello friends and welcome to this episode of Photo 365. My name is Andrew Haworth.

Before I get into this episode’s topic, I just want to preface the discussion by admitting that I may not be the right person to discuss this particular issue. I’m going to be talking about skin color, and by extension, race. As one of the palest old white guys you’re likely to meet, I realize I haven’t had first-hand experience in the type of discrimination I’m about to discuss. I’m going to stick to facts, and what I know about photography and light.

Like many of you, I tuned in to the Super Bowl earlier this year. While the game itself didn’t interest me, I do enjoy watching the ads and I was curious about the half-time show, which promised some 90s nostalgia from Dr. Dre and company. During a break, I was fascinated by an ad for the Google Pixel 6 smartphone, which uses technology Google calls “Real Tone” to more accurately portray darker skin tones.

If you missed the commercial, check the show notes for a link to the video. This discussion will make more sense if you’ve actually seen the images I’m about to talk about.

The ad begins with somber text on a black background, informing the viewer that “historically, camera technology hasn’t accurately represented darker skin tones.” This, in fact, is true, and I’ll talk about it later in the episode.

As the ad continues, we’re shown several examples of photos of people of color in which their faces are shadowed and featureless. These are intended to show the technology problem.

They are followed by a series of photos, presumably taken with the new Pixel 6 phone, of people with a variety of skin tones. These images are well lit, properly exposed, and they all look great. The slogan “Everyone deserves to be seen as they truly are” fades in over an image of Lizzo, whose music provides the soundtrack.

Rembert, SC, Home of the Black Cowboy Festival, 2021

If you read the YouTube comments on this ad – and I don’t recommend doing that unless you want to start screaming at internet trolls – you’ll see allegations that this is some extreme “woke” effort from Google, or that other smartphone cameras are racist, or something along those lines. But Google is on to something real here, no pun intended, because consumers and amateurs genuinely do have difficulty photographing darker skin tones. There are several reasons for that, none of which should be a problem for a seasoned or professional photographer using modern tools.

My only issue with the ad is that it’s simply a little deceptive, and it doesn’t provide a point of reference for us to see how Google’s technology might actually improve a photo of someone with darker skin.

The examples of “bad photos” shown at the beginning of the ad are indeed pretty awful as far as photos go. But it’s not because the cameras that produced the images were in any way biased against darker skin, at least not that we’re aware of. It’s because the basic, fundamental principles of photography were violated. To Google’s credit, these actually DO represent the way a non-photographer would shoot, because we’ve all seen and maybe even taken images like this before.

One example is severely backlit. Pose anyone in front of a window, set your camera to auto, snap a photo and see what happens. Depending on how backlit the scene is, the camera’s light meter is going to react to the bright background and reduce the exposure of the overall scene, thus underexposing the person, making them appear darker, if not silhouetted.

Another image in the ad is simply dark and grainy. Even the lighter-skinned subjects in the image are underexposed. The final example is a misuse of the available light. The subject is facing away from a light source, and she’s already in a very dark room.

As many of you already know, we generally accept that light meters in cameras are calibrated to produce a usable exposure when metering off something that has an 18% reflectance of light. The tool to reference this has traditionally been the gray card, and more recently, tools like the X-Rite ColorChecker and similar products, which incorporate not only gray, but also color swatches for more accurate rendition of color.

Rembert, SC, Home of the Black Cowboy Festival, 2021

If you don’t have a gray card, you can aim your camera at a patch of grass, blue sky, or a region of similar value, and set your exposure accordingly. Keep in mind, camera metering systems are not all equal.

Camera meters have come a long way through the years. A film camera like the Nikon FM2 or Pentax K1000 would have used a center-weighted approach that collected light values largely from around the center of the frame. As cameras became more sophisticated, true spot metering could be achieved, and algorithmic exposure calculations based on readings from multiple segments of the frame became common.

The camera I currently use takes a meter reading from its active focus point, because it assumes that area is a region I want a relatively normal exposure on. It’s not always correct, and can be easily fooled by a dark or light object. If that happens, the onus is on me to take control and adjust the exposure accordingly, using either manual controls or exposure compensation.

I haven’t found a camera yet that is smart enough to always produce the right exposure, but the fact remains: the 18% gray standard does tend to produce a good starting point, as long as you meter on something that is, in fact, 18% gray. Human skin, however, isn’t always the same value as 18% gray.
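To put some numbers on that, here’s a minimal sketch in Python of the exposure compensation a reflective meter implies. The function name and the reflectance values are my own illustrations, but the arithmetic is simply the subject’s reflectance expressed in stops relative to 18% gray.

```python
# Minimal sketch: a reflective meter renders whatever it reads as middle gray,
# so the compensation needed is the subject's reflectance in stops vs. 18%.
import math

def exposure_compensation_stops(subject_reflectance, middle_gray=0.18):
    """Positive result = add exposure, negative = reduce exposure."""
    return math.log2(subject_reflectance / middle_gray)

# Metering off something very reflective, like a white wall (~90% reflectance):
print(round(exposure_compensation_stops(0.90), 1))   # about +2.3 stops
# Metering off something very dark, like black fabric (~4% reflectance):
print(round(exposure_compensation_stops(0.04), 1))   # about -2.2 stops
```

In other words, if you meter directly off a subject that is much lighter or darker than middle gray, it’s up to you to dial that difference back in.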

Surprisingly, I’m often asked how I photograph people with dark skin. People tend to assume that it’s not possible to photograph black and white people together and have both of them properly exposed. There’s no one technique I can cite, but to keep it simple, a proper exposure, along with the right kind of light, is really the key.

I developed a sensitivity, if you want to call it that, to photographing people of color at a fairly early point in my career as a photographer. Returning home after college to work at my hometown newspaper, I recall working with a photo editor who insisted direct flash must be used when photographing people with darker skin. I strongly disagreed, preferring the softer, feature-defining look of bounce flash, a technique I learned from one of my mentors. I was promptly told newspapers didn’t need that sort of “artistic shit.” Those were his words.

Rembert, SC, Home of the Black Cowboy Festival, 2021

This photo editor, who was set in his ways and had more years of experience than I’d been alive at that point, argued that the only way to photograph people of color – black people specifically – was to blast them with direct flash. This, according to him, was the only way dark skin would reproduce well on newsprint. I think nowadays we call this “subtle racism.”

The editor’s reasoning was partially based on the limitations of the press our newspaper used. We often had trouble profiling the monitors we used to edit photos so that what we saw on-screen matched what the press produced; images always came out slightly darker on newsprint than they appeared on-screen.

Still, I didn’t see that as any reason to blast people with direct flash, resulting in that mug shot, deer in the headlights, flat look. And I just couldn’t stand to see “evidence” of a flash at work in any image, namely the harsh shadows behind the subject.

I quickly learned ways to photograph darker skin, using natural light and bounce flash, without overexposure, overlighting, or excessive dodging of skin tone. It’s really all about using the directionality of light to create and enhance shapes. Blasting a flash right at your subject may increase the brightness of the skin, but it does nothing to create contrast and shapes. If anything, it flattens features in an unnatural way.

As photographers, we’re supposed to be experts in how light works. One of the earliest skills a photographer should learn is how to see light. It sounds obvious, but in addition to seeing, we have to understand the difference between how we see light as humans, compared to how camera sensors or film stocks see light. The difference is dynamic range – in other words, when we study a scene, we can see details in most of the brightest and darkest areas. The human eye’s abilities far exceed those of our cameras.

Rembert, SC, Home of the Black Cowboy Festival, 2021

In film photography, we used to refer to the latitude of film – this was a measure of a film’s ability to capture details over a certain range of brightness. Films like Tri-X, for instance, were said to have a wide latitude, because you could still pull details out of a poorly exposed image.

By comparison, slide films, such as Fuji Velvia, were very contrasty – bright highlights and dark shadows. Exposure had to be carefully considered. If you exposed for shadow detail, you’d destroy highlight detail and vice versa.

Digital sensors were once much like slide film, but advances in the technology have yielded sensors that rival or exceed film in terms of dynamic range. I’m often blown away while editing files from my Sony a7 III. The RAW files almost seem to have TOO MUCH dynamic range at times, and that’s a good problem to have.

Nowadays most digital cameras and smartphones have some sort of high dynamic range, or HDR, mode. Once a technique that involved tripods, stacked exposures and digital tone mapping, HDR now happens instantly on your smartphone. The goal isn’t to produce those crazy hyper-colored HDR images that were in vogue around 2005 or so, but rather to use rapid multiple exposures and computational photography to capture images with decent dynamic range. When you look at an image on your phone, you may be looking at a composite built from the best parts of several different exposures.
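As a rough illustration of the idea (and this is only a toy sketch, not Google’s or anyone’s actual HDR pipeline), here’s how a simple “well-exposedness” blend of a bracketed stack might look, weighting each pixel by how close it sits to middle gray.

```python
# Toy sketch of exposure fusion: blend a bracketed stack by weighting each
# pixel according to how close it is to middle gray ("well-exposedness").
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """stack: list of float images in [0, 1], all the same shape (H, W, 3)."""
    stack = np.stack(stack, axis=0)                  # (N, H, W, 3)
    # Pixels near 0.5 get heavy weight; clipped highlights/shadows get little.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * stack).sum(axis=0)             # weighted per-pixel blend

# Hypothetical usage with three bracketed frames loaded as float arrays:
# fused = fuse_exposures([under, normal, over])
```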

Up to this point, I’ve mostly been concerned with exposure. In other words, brightness. When we start looking at the color science of the photography industry, things get especially problematic. This is the “historical problem” with camera technology that Google referred to in its Super Bowl ad.

In the 1950s, as color film became popular, Kodak issued photo labs 4x6 inch cards for certain film stocks that featured a photo of a model and a range of color bars. The idea was that these cards represented an ideal skin tone and color rendition for a given film stock. They were a tool to help photo lab workers calibrate their printers based on the tones represented on the card. These became known as “Shirley cards,” named for Shirley Page, the white model featured on many of them.

Rembert, SC, Home of the Black Cowboy Festival, 2021

Some photographers felt the film itself was biased towards white skin. When I interviewed civil rights photographer Cecil Williams for an earlier episode of this podcast, he spoke specifically about how he had to devise his own strategies for processing and printing images to favor darker skin.

Notably, in the 1970s, the filmmaker Jean-Luc Godard refused to use Kodak film to shoot in Mozambique, declaring the film “racist” because of the way it rendered black skin.

The color science would eventually start changing, but only after advertisers complained that their dark-colored products didn’t look accurate.

It wasn’t until the mid-1990s, when digital was on its way in and film was on its way out, that Kodak finally created multiracial Shirley cards. Yet issues with race and photographic “darkness” continue today. As humans, we let visual input inform the way we interpret a situation. We’ve been conditioned to see darkness as something sinister – in movies, villains are portrayed in the shadows; as children, we’re scared of the dark, and so on. Those associations carry over into photography as well.

Here’s a modern example: After O.J. Simpson was arrested and accused of murder in 1994, Time and Newsweek both ran nearly identical covers featuring the Simpson mugshot. Newsweek published the photo mostly as-is. Time significantly darkened the mug shot and added a heavy vignette that made Simpson look clearly sinister. They might have gotten away with it, had Newsweek not come out with its cover the same week.

This case has often been cited as an abuse of Photoshop, but it also sparked a debate on the portrayal of black people in the media. Time would eventually replace the issue with a non-altered version of the image. The photo-illustrator responsible claimed the altered image was an “interpretation” intended to give it a dramatic tone.

More recently, the Hulu show “Woke” referenced the OJ cover when the main character, the cartoonist Keef Knight, complained to his publicist that his headshot had been lightened to make him look less black. Later in the show, after he’s been banned from the agency, a highly darkened version of the same headshot is posted at a guard station, with instructions not to let him in the building. It’s funny, but like much of the humor on the show, it’s a commentary on injustice in daily life that many of us either choose to ignore or just don’t notice.

So, as photographers, what can we do?

Rembert, SC, Home of the Black Cowboy Festival, 2021

Assuming we dial in the proper exposure, how can we be assured the skin tone is accurate? And is being completely accurate really necessary? I try to use a color checker as often as possible. If that isn’t available, I’m very careful about selecting a white balance so I can start at a neutral point and fine-tune from there.

I also find incident light meters to be handy. The light meters in our cameras are reflective light meters, which measure how much light is being reflected from a subject. These are easily fooled by light and dark subjects. An incident meter measures the light falling on a subject, and therefore won’t be fooled by how light or dark the subject is.
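A toy comparison makes the difference clear. The numbers below are made up purely for illustration, but they show why a reflective reading shifts with the subject while an incident reading doesn’t.

```python
# Toy comparison (made-up numbers): a reflective meter reads light bouncing
# off the subject, so its reading scales with reflectance; an incident meter
# reads the light falling on the subject, which is the same for everyone.
incident_light = 1000.0   # arbitrary units of illumination

subjects = {"dark fabric": 0.04, "18% gray card": 0.18, "white wall": 0.90}

for name, reflectance in subjects.items():
    reflective_reading = incident_light * reflectance
    print(f"{name:>14}: reflective {reflective_reading:6.0f}   "
          f"incident {incident_light:6.0f}")
```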

I recently did a video shoot at work that involved men and women of different ethnicities, with quite a bit of variation in skin tone. I set my white balance off an X-Rite ColorChecker Passport, and set the exposure from the included 18% gray card. I used two cameras, one directly in front of my subject, and another on the side, opposite the main light. We used a standard three-point light setup with one large softbox as the key light, a backlight directly opposite the key, and a small light behind the subject aimed at the background to create a gradient on the gray seamless.

I did not change the exposure on either camera for the duration of the shoot, and very little post processing was needed in the end to match the shots. I know it’s hard for some folks to believe, but the proper exposure for black, white, Hispanic, and Asian subjects was exactly the same, because the same amount of light was falling on each subject, and I’d preset my exposure on a true, known 18% gray target.

There may be times when you set an exposure that is technically correct, but you still lose details in the faces of subjects. Dark high school gymnasiums come to mind here. For one, the light isn’t bright enough, and lengthening the exposure to compensate means you can no longer freeze the action. For another, the existing light doesn’t have enough directionality to provide shape, shadows and highlights. This is a situation where remotely triggered strobes strategically placed around the building can help tremendously.

So back to Google’s Real Tone technology. We’ve established that if we’re trying to represent skin tones properly, there are two factors at play: exposure, or overall brightness, and color balance.

In theory, it should be easy for a computer to analyze an image and deliver proper skin tone. If you’ve ever done a deep dive into fine-tuning color for video, you’ve probably used a vectorscope. It’s built into most video editors, and it has one valuable feature that really helps a red-green colorblind guy like me.

Most vectorscopes have a line between the yellow and red values, known as the “skin tone line.” The idea here is that if you analyze video with a vectorscope, skin tones, for all ethnicities, will fall along that line. So let’s say you open your scope and crop a frame of video to show just a patch of your subject’s skin. You should see an indication on the scope around or on the skin tone line. If the skin tones are off – meaning maybe your camera’s white balance wasn’t set properly, or maybe you were shooting in mixed lighting – the scope will reflect that, and the plot won’t line up with the skin tone line.

We can then use color grading tools to adjust the color balance until the plot on the scope lines up with the skin tone line. This works with all skin, light or dark, owing to how our skin works. We generally think of melanin as the chemical that gives our skin its pigment. But melanin reportedly contributes only to the brightness and saturation of our skin color.

Rembert, SC, Home of the Black Cowboy Festival, 2021

The color of skin comes from our blood, and all humans have the same shade of blood. The vectorscope represents just hue and saturation, not brightness. So very dark skin and very pale skin will both have the same underlying color; only their brightness and the saturation of that color will vary. The vectorscope reveals that relationship, and that’s why it’s such a useful tool for colorists.
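If you want to see that relationship in numbers, here’s a small sketch that converts RGB to YCbCr (the BT.601 math a vectorscope plot is based on) and reports the chroma angle. The three swatch values are invented for illustration, not measured skin samples, but they land at nearly the same hue angle while the luma varies widely.

```python
# Rough sketch of what a vectorscope plots: convert RGB to YCbCr (BT.601)
# and look at the angle and length of the chroma vector.
import math

def chroma_angle(r, g, b):
    """Return (luma, saturation, hue angle in degrees) for 0-255 RGB."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, math.hypot(cb, cr), math.degrees(math.atan2(cr, cb))

# Made-up swatches standing in for pale, medium, and dark skin.
swatches = {"pale": (230, 190, 170), "medium": (180, 130, 100),
            "dark": (100, 65, 50)}

for name, rgb in swatches.items():
    y, sat, angle = chroma_angle(*rgb)
    print(f"{name:>6}: luma {y:5.1f}  saturation {sat:5.1f}  hue {angle:5.1f} deg")
```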

I have to think Google is using an algorithm based on these principles for their Real Tone technology.

Some photo editing programs have a vectorscope, but Adobe’s Photoshop and Lightroom inexplicably don’t. Ironically, correcting skin tones is one of the primary uses for these programs when initially toning an image.

Many color experts have created their own skin tone reference swatches, and if you know how to “read the numbers,” you can sample an area of skin and glance at the RGB values to know if the tone is off. Typically, the values should show more red than green, and more green than blue. After that, it’s up to your eye to do the rest. That’s why I’m an advocate of carrying a portable color chart nowadays – because I can’t trust my own eyes when it comes to the subtle shades of red and green contained in skin tones.
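Here’s what that “reading the numbers” check might look like if you scripted it, with a hypothetical sampled patch standing in for the eyedropper.

```python
# Quick sanity check on a sampled skin patch: average the channels and
# confirm they run red > green > blue. The patch values are hypothetical.
import numpy as np

def skin_channels_plausible(patch):
    """patch: (H, W, 3) array of RGB values sampled from skin."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return r > g > b

patch = np.array([[[182, 131, 102], [176, 128, 99]],
                  [[179, 130, 101], [184, 133, 104]]], dtype=float)
print(skin_channels_plausible(patch))   # True
```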

Apple’s Photos app, which comes on every new Mac by default, is actually a decent tool for quickly tuning skin tone. Open the editing palette for an image and take a look at the white balance tools. Change the setting there to prioritize skin tones when setting the white balance, then use the eyedropper to sample a skin tone from the image. Clicking on dark, medium or pale skin will produce nearly the same result. You can use the warmth slider on the same panel to adjust to taste.

Keep in mind, total accuracy is virtually impossible in photography. The way sensors render color and the way lenses render a scene will never be exactly the way the human eye sees it. But at the end of the day, there’s really no reason for a decent photographer to have trouble accurately representing skin tones and exposure with the tools we have available today. Hopefully Google’s new technology will help even amateurs make their images match what they saw in their mind’s eye.

That’s going to do it for this episode of Photo 365. If you enjoyed this episode or found some useful information in it, be sure to share it with a friend. Our show is on Spotify, Google Podcasts, Apple Podcasts and everywhere else. Remember, you can find full transcripts of this and all episodes at the show website, photo365podcast.com.

Keep looking out for great images, keep shooting, and we’ll see you next time.