Our Ethics in Presenting Images

The vast majority of our images are not single shots. They are stacks of multiple exposures, stitches of numerous images, HDRs, or combinations of photos taken under various kinds of light, often involving hours, days, or even weeks of laborious manual work in specialised software. In other words, our work can be classified as photography-based digital art. At the core, however, our images are still photographs that depict reality.

We do not call our images traditional “wildlife photos” captured with one click of the shutter. However, we put effort into keeping our images as close to natural as possible. We do not create unnatural colours, and we do not add, remove, or distort body parts. We do not replace red with green, or “enhance” pale pink into popping red.

There are no universal guidelines on what is acceptable in photography, so the topic is prone to confusion. The most misleading terms, for example, are “photoshopped,” “graded,” and the very outdated and overused term “natural.”

To understand where to start, it is simplest to explain the basics of how cameras work and, for that matter, how our brains process images.

Image processing in the brain and in the camera

When people talk about the accuracy of image acquisition, we often forget that our vision is a product of brain activity. The human brain is not a tool with which we perceive objective reality; it is a tool that perceives what is relevant to our survival, and it does so in the most energy-efficient way.

For example, there’s no such thing as colour outside your head. Colours are born in the secondary visual cortex after contextual information is processed. The same wavelength of light entering the eye can be rendered radically differently by the brain, depending on the context of the whole image. Colour interpretation is also shaped by our language. For example, modern members of the Himba tribe struggle to distinguish blue from green despite having all retinal receptors present. In the literature, legends, and historical records of the distant past it is nearly impossible to find words related to “blue,” even in descriptions of water, the ocean, and the sky. In other words, there is a lot of cultural influence over our colour perception.

The ability to distinguish colour shades and recognize them quickly comes only after a lot of practice, particularly in an underwater environment.

While Photoshop and similar pieces of software have become synonymous with unfair image manipulation, we have to accept the fact that our brains also run internal “photoshops.” Not only does the brain make up colours, it also creates an image by adding or subtracting information to or from what it receives from the eyes. It manipulates contrast and adds edges to objects to perform what we call “sharpening” in editing jargon. What you see (and what enters your attention) also depends on your mood, your past experiences, your memories, how many times you’ve seen similar objects, and so on. For example, it takes a fraction of a moment to recognize that a tree is a tree and to build a mental picture of it without counting leaves or taking careful notes on the colours. The brain simply makes up the information based on your experience to save time.

Cameras have their own algorithms for encoding colour information. Technically, there is no such thing as an ungraded image. An ungraded image is a sequence of zeroes and ones captured by the camera sensor and processor. You need to tell the computer and the camera how to render those numbers into something that makes sense to your brain. The whole idea of editing sensor information is to match it with one’s subjective perception.
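As an illustrative sketch (not our actual pipeline), the snippet below shows one small part of that rendering step: mapping linear sensor values through the standard sRGB transfer curve. The values are hypothetical; real raw conversion also involves demosaicing, white balance, and colour-matrix transforms.

```python
# A minimal sketch of why raw sensor data must be "rendered": linear
# sensor readings are passed through the standard sRGB transfer curve
# before they look natural on a display.

def linear_to_srgb(v):
    """Map a linear sensor value in [0, 1] to sRGB gamma encoding."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

# A mid-grey surface reflects roughly 18% of the light hitting it,
# yet after gamma encoding it lands near the middle of the displayable
# range, which is closer to how our vision judges brightness.
linear = 0.18
encoded = linear_to_srgb(linear)
print(f"linear {linear:.2f} -> sRGB {encoded:.3f}")
```

Without such a curve, a straight dump of sensor values looks far too dark, which is one concrete sense in which an “ungraded” image does not yet depict anything.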

The most common parameter, one almost everyone is familiar with, is white balance. It is the most important tool for matching the digital image to what the brain does internally with visual information. This type of adjustment makes sure the image doesn’t look too blue or too yellow.
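To make this concrete, here is a hedged sketch of one well-known automatic white-balance heuristic, the “gray world” assumption: the average colour of a scene should come out neutral grey, so each channel is scaled toward the overall mean. The pixel values are hypothetical, and this is not the algorithm any particular camera uses.

```python
# "Gray world" white balance sketch: scale each colour channel so the
# scene's average colour becomes neutral grey.

def gray_world_balance(pixels):
    """pixels: list of (r, g, b) floats in [0, 1]. Returns balanced pixels."""
    n = len(pixels)
    # Average of each channel across the whole image.
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(avg) / 3                  # target neutral level
    gains = [grey / a for a in avg]      # per-channel correction factors
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A scene with a yellow cast: red and green dominate blue.
scene = [(0.8, 0.7, 0.3), (0.6, 0.5, 0.2)]
balanced = gray_world_balance(scene)
```

After the correction the three channel averages are equal, which is exactly the “not too blue, not too yellow” outcome described above.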

Underwater, where the red part of the spectrum is absent at depth, our brains struggle to render colours. Everything appears immersed in a blue-green haze. A torch beam appears reddish because the brain is desperately trying to make up for the missing reds in the image. Thus, in theory, there is no “correct” way to adjust white balance underwater: this colour environment is simply outside our colour perception. Objects’ colours also depend on their proximity to the observer or camera. For example, a supermacro image, where the amount of water between the camera and the subject is minimized, will have the most vivid colours.

To complicate the underwater colour problem even further, most people are used to “warm” yellow lights, have rooms fitted with warm bulbs (often with a low CRI), and train their colour vision in those spectral surroundings. Very few artists ever have to distinguish shades of blue or work with colours under blue-light illumination.

Marine animals have very complex visual systems, quite different from ours. They have to interpret “colours” that we simply classify as “blue” without further distinction. They are also capable of detecting many shades of UV light.

Digital artists, photographers, and cinematographers apply the same lighting and editing rules underwater that they do on land. It is standard in underwater photography and documentary production to use underwater lights (neutral white or, worse, warm light) to restore the missing reds in the image.

A problem with that approach becomes more critical with fluorescence. Fluorescence is a physical phenomenon in which a substance transforms the wavelength of light. It introduces another dimension of colour-interpretation complications, as the colours of corals depend on the light spectrum they are exposed to. Fluorescent colours in fact appear differently underwater, and modern screens cannot fully reproduce the effect. In the green-blue ambient underwater light, these colours are strikingly intense and saturated, beyond what can be mapped within standard digital colour spaces.

Under an artificial (incorrect) light spectrum, most fluorescent colours are gone. Considering that, according to recent scientific literature, fluorescent pigments are a major part of coral colouration, illuminating corals with an incorrect spectrum produces colours that are not represented in nature.

At BioQuest Studios we strive to reproduce the natural colours that animals have under the natural ambient light spectrum. We do finely adjust colour shades to match a certain visual style; in digital imaging, however, such adjustment is inevitable whether one is aware of it or not (differences in screens, cameras, software settings, the light in the room, etc.).

Acceptable degree of image retouching, manipulation, and composition

In the majority of cases, concerns about colour manipulation, adjusting settings in camera or on the computer, and combining multiple images into a single one come from the film era. Film photographers had far less control over what they could show, and we inherited the perception that “image manipulation” is a form of cheating. There are standards set by some high-profile photo contests and photography experts, and some competitions are becoming more open to adjustments that in the past were considered totally unacceptable. For example, HDR and focus stacking are gradually becoming recognized as standard. In fact, many modern digital cameras already implement HDR capabilities without their owners realizing it.
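To make the HDR idea concrete, here is a minimal sketch of exposure fusion: bracketed exposures are combined by weighting each pixel by how well-exposed it is, so shadow detail comes from the brighter frame and highlight detail from the darker one. The values are hypothetical, and this is a simplified illustration, not any camera’s actual implementation.

```python
# Exposure-fusion sketch: blend bracketed exposures per pixel, giving
# the most weight to values near mid-grey (i.e. well-exposed samples).

def fuse_exposures(exposures):
    """exposures: list of equal-length lists of luminance values in [0, 1]."""
    def weight(v):
        # Hat function: peaks at 0.5, falls to 0 at pure black or white.
        return 1.0 - abs(v - 0.5) * 2.0

    fused = []
    for pixel_stack in zip(*exposures):
        weights = [weight(v) + 1e-6 for v in pixel_stack]  # avoid div by zero
        total = sum(weights)
        fused.append(sum(v * w for v, w in zip(pixel_stack, weights)) / total)
    return fused

dark = [0.05, 0.40, 0.10]    # underexposed frame keeps highlight detail
bright = [0.50, 0.95, 0.60]  # overexposed frame keeps shadow detail
result = fuse_exposures([dark, bright])
```

Each fused value lands between the two source exposures, pulled toward whichever frame exposed that pixel better, which is the essence of what in-camera HDR modes do at much larger scale.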

Digital imaging is both science and art. It is constantly evolving and expanding in its possibilities. It cannot be disconnected from the subjective interpretation of the artists who control the cameras, or from the decisions of camera designers. Limiting photography with outdated standards and misinterpretations only delays the inevitable.

The rules of photography have irreversibly changed with digital imaging. Astrophysics and biology would not have developed to their modern levels had it not been for the desire to capture more in an image than our perception allows. Most astronomical images, and much of what we know about the Universe, are the result of incredibly sophisticated processing of sensor information. Researchers reconstruct 3D structures from the dim light coming from single cells and other objects we cannot see directly. We learn about our world by extending our senses with digital imaging.

With the images that BioQuest Studios produces, we want to show you the surrounding world. However, our images cannot be achieved with one actuation of a camera shutter. Our aim is not to produce images of the things you already see; we make images of what you don’t see, and in doing so find new possibilities.