This has nothing to do with climate or climate change, but as a photographer it interests me.
Some version of the following image has been making the rounds on social media for many years. The accompanying claim goes something like this:
“These two blocks are exactly the same shade of grey. Hold your finger over the seam and check.”
I can demonstrate that this is not the case.
The two blocks actually are very different in their shades of gray, given the source of illumination implied by (1) the lighted area between them and (2) the shadow they cast on the ground below.
If you cover up that seam (and the shadow as well), they appear to be the same shade of gray only because your brain, lacking any other visual cues, assumes they are equally illuminated. But given knowledge of the direction of the illumination, your brain tells you that they really are different shades of gray.
If you still don’t believe me, you could demonstrate this with two pieces of paper in very different shades of gray: take them out in the sun and orient them like the two objects above. You would need to find two shades (say, two paint-swatch cards) whose apparent brightness, as measured by photographing them and reading the pixel values in Photoshop, is approximately the same. In that case, would you say, “These two cards are the same shade of gray, because I measured them in Photoshop”?
Of course not.
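The Photoshop measurement described above can be mimicked in a few lines of code. This is a minimal sketch, not the original analysis: the image here is a synthetic stand-in for the illusion (both blocks rendered at the same gray level), and the patch coordinates and pixel value are illustrative assumptions.

```python
def patch_mean(pixels, top, left, size):
    """Average gray value of a size x size patch; pixels is a list of rows of gray levels (0-255)."""
    total = sum(pixels[r][c]
                for r in range(top, top + size)
                for c in range(left, left + size))
    return total / (size * size)

# Synthetic stand-in for the illusion image: a 20x20 grayscale grid with both
# "blocks" rendered at the SAME gray level (120) -- value is illustrative.
image = [[120] * 20 for _ in range(20)]

upper = patch_mean(image, 2, 5, 4)   # sample inside the "upper" block
lower = patch_mean(image, 14, 5, 4)  # sample inside the "lower" block
print(upper == lower)  # the recorded pixel values are identical
```

The point of the measurement, as argued above, is that identical pixel values tell you about the light reaching the camera, not about the reflectance of the surfaces.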
Now, the question arises: why do the centers of the surfaces still appear to have different brightnesses, even though the recorded values are the same? As a photographer, I’ve noticed that when you take a photo of a very high-contrast scene, your eye can see details in the shadows that the recorded camera image cannot. Similarly, very bright areas might show details to the eye, but be totally washed out in the camera image.
I don’t believe this is just a difference in dynamic range between the eye and a camera, because the iris opening of the eye is the same for the entire scene, and the inherent integration time of the eye-brain system is presumably the same across your rods and cones. I think it’s because our brain does a sort of localized contrast enhancement within our field of view, making shadowed things seem brighter and very bright things seem dimmer. (You can make similar adjustments using “curves” in Photoshop.)
It’s sort of the visual equivalent of audio compression. The brain alters perceived brightness locally to enhance contrasts. I believe this is why we photographers often use adjustments in software to get the image to look more like what our eye and brain perceived.
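The localized contrast enhancement described above can be sketched in one dimension. This is a toy illustration, not any real Photoshop or visual-system algorithm: the function names, window radius, gain, and the luminance profile are all assumptions. The profile is a Cornsweet-style edge, with two plateaus at the same level joined by a ramp and an abrupt step.

```python
def box_mean(signal, radius):
    """Mean over a sliding window of width 2*radius + 1 (clipped at the ends)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def local_contrast(signal, radius=3, gain=2.0):
    """Push each value away from its local mean -- a crude 'curves'-like boost."""
    return [m + gain * (s - m) for s, m in zip(signal, box_mean(signal, radius))]

# 1-D luminance profile: two plateaus at the SAME level (100), joined by a
# ramp up to 120 and an abrupt drop to 80 (a Cornsweet-style edge).
signal = [100] * 10 + [104, 108, 112, 116, 120] + [80, 84, 88, 92, 96] + [100] * 10
enhanced = local_contrast(signal)
# Far from the edge the plateaus are untouched; right at the edge the bright
# side is pushed brighter and the dark side darker, exaggerating the step.
```

The effect is analogous to how a compressor reshapes audio levels: the global range is tamed while local differences are amplified, which is roughly what the eye-brain system seems to do across a scene.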
I just discovered that my explanation involving localized contrast enhancement seems to be supported by a 1999 article in The Journal of Neuroscience entitled “An Empirical Explanation of the Cornsweet Effect.”
via Roy Spencer, PhD.
September 14, 2019 at 11:45AM