Copyright 2019 by András Péter
E-Mail: [email protected]
It is important to note that, due to the limitations of and differences between computer screens and image capture devices, this image processing tool cannot show exactly what dogs can or cannot perceive! It can, however, show the differences between what a human and what a dog would perceive.
Digital images store color information in three channels: Red, Green and
Blue. These channels correspond to the three color sensitive cell types in the
human eye. Dogs don't have separate green- and red-sensitive cells, but a single cell type with a sensitivity between the two.
Therefore, the red and green color information in an image has to be averaged to simulate how dogs see the world.
The calculation goes as follows:
Dog Blue Value = Human Blue Value
Dog Red Value = (Human Green Value + Human Red Value) / 2
So that what the dog sees as white/gray (a color where all sensed components have the same intensity) also appears white/gray in the simulated image, the green value in the simulated image is set equal to the red value:
Dog Green Value = Dog Red Value
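The per-pixel color mapping above can be sketched as follows; the function name and the 0–255 channel range are assumptions for illustration, not part of the original tool:

```javascript
// Map one RGB pixel from human vision to the simulated dog vision,
// following the formulas above. Channel values are in the 0-255 range.
function toDogColor(r, g, b) {
  const dogRed = (r + g) / 2; // red and green information is averaged
  return {
    r: dogRed,
    g: dogRed, // green is set equal to red so neutral grays stay neutral
    b: b       // blue is taken over unchanged
  };
}
```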
During this process the overall brightness of the image can change. An additional algorithm compensates for this effect, so that in the end the processed image has the same brightness as the original.
To simulate the dogs' reduced brightness discrimination, the brightness range of the image has to be reduced to 50%. Simply halving all brightness values would achieve this, but it would also darken the image. To compensate for this effect, the average brightness of the image has to be calculated first, with the color channels weighted to account for the human eye's differing sensitivity to different colors:
Average brightness = Average Human Red Value × 0.30 + Average Human Green Value × 0.59 + Average Human Blue Value × 0.11
Then the dog values are modified as follows:
Dog X Value = (Dog X Value + Average brightness) / 2
By doing this the overall brightness of the image stays the same, but the brightness range is halved.
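A minimal sketch of this brightness-range compression is given below. It assumes a flat `[r, g, b, r, g, b, ...]` pixel array and computes the weighted average from that same array; the function name and data layout are assumptions, not the tool's actual code:

```javascript
// Compress the brightness range to 50% while keeping the average
// brightness unchanged. `pixels` is a flat [r, g, b, r, g, b, ...] array.
function compressBrightnessRange(pixels) {
  // Luminance-weighted average brightness (0.30 R + 0.59 G + 0.11 B).
  let sum = 0;
  for (let i = 0; i < pixels.length; i += 3) {
    sum += pixels[i] * 0.30 + pixels[i + 1] * 0.59 + pixels[i + 2] * 0.11;
  }
  const avg = sum / (pixels.length / 3);
  // Pull every channel value halfway toward the average brightness:
  // X = (X + avg) / 2, which halves the range around avg.
  return pixels.map(v => (v + avg) / 2);
}
```

On a two-pixel image of pure black and a bright gray, the output values move halfway toward the mean, so the spread between darkest and brightest value is halved while the mean stays put.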
The effect of decreased visual acuity can be shown by blurring the image, which is done by averaging the values of neighboring pixels. For performance reasons I chose a linear blur algorithm. This algorithm processes the image in two passes: horizontal and vertical. In the horizontal pass, at each pixel it takes the values of some pixels to the right and some pixels to the left of the current pixel and calculates the average of these values. Finally, it replaces the current pixel's value with this average. The vertical pass works the same way, except that the pixels above and below the current pixel are used to calculate the average.
The number of neighboring pixels included in the calculation determines how strongly the image is blurred. To decrease the image detail by a factor of 2, half of the left neighbor, the current pixel, and half of the right neighbor are averaged. To decrease the detail by a factor of 3, half of the second-left neighbor, the left neighbor, the current pixel, the right neighbor, and half of the second-right neighbor are used.
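One pass of this windowed average over a single row of gray values could look like the sketch below. It gives the outermost neighbors half weight, as described above; the function name and the edge handling (skipping pixels outside the row) are assumptions, not the tool's actual implementation:

```javascript
// One horizontal blur pass over a row of gray values. `factor` is the
// detail-reduction factor: the window spans factor/2 pixels on each side,
// with the outermost neighbors contributing only half weight.
function blurRow(row, factor) {
  const reach = Math.ceil(factor / 2); // e.g. 1 for factor 2, 2 for factor 3
  const out = new Array(row.length);
  for (let x = 0; x < row.length; x++) {
    let sum = 0, weight = 0;
    for (let dx = -reach; dx <= reach; dx++) {
      const i = x + dx;
      if (i < 0 || i >= row.length) continue; // skip pixels outside the row
      const w = Math.abs(dx) === reach ? 0.5 : 1; // outermost: half weight
      sum += row[i] * w;
      weight += w;
    }
    out[x] = sum / weight;
  }
  return out;
}
```

For factor 2 this reproduces the description exactly: (0.5 × left + current + 0.5 × right) / 2. The vertical pass would be the same loop run over columns instead of rows.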
However, there is a problem with this approach: blurring with only one averaging window can leave details visible that are smaller than the window. This effect can be seen in the image below, where horizontal bands stacked above each other show the effect of averaging windows of 2, 4, 6, 8, 10, 12, and 14 pixels.
[Figure: stacked horizontal bands showing the original image and the results of averaging windows of 2, 4, 6, 8, 10, 12, and 14 pixels]
To overcome this problem, the blurring has to be applied incrementally. For example, to reduce the image detail by a factor of 4, first a pass with an averaging window of 2 pixels, then a pass with a window of 4 pixels, and finally a pass with a window of 6 pixels has to be applied.
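The incremental scheme can be sketched as below. Only the factor-4 case (windows of 2, 4, 6 pixels) is given in the text; the assumption that the pattern generalizes to windows 2, 4, ..., 2 × (factor − 1) is mine, and a plain box average stands in for the windowed pass:

```javascript
// Simple box-average pass over a row, used here as a stand-in for the
// half-weighted windowed pass described earlier.
function blurPass(row, windowSize) {
  const reach = Math.ceil(windowSize / 2);
  return row.map((_, x) => {
    let sum = 0, n = 0;
    for (let i = Math.max(0, x - reach); i <= Math.min(row.length - 1, x + reach); i++) {
      sum += row[i];
      n++;
    }
    return sum / n;
  });
}

// Apply the blur incrementally: for factor 4 this runs passes with
// windows of 2, 4, and 6 pixels, matching the example in the text.
// The generalization to other factors (2, 4, ..., 2 * (factor - 1))
// is an assumption.
function incrementalBlur(row, factor) {
  let out = row;
  for (let w = 2; w <= 2 * (factor - 1); w += 2) {
    out = blurPass(out, w);
  }
  return out;
}
```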
The color values in an image file are gamma compressed, meaning that the relationship between the stored values and the luminance they represent is not linear. Therefore, before the image processing starts, the color values have to be gamma expanded, and after the processing, gamma compressed again. The JavaScript file that does the image processing can be found at this link.
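A minimal sketch of the gamma handling is given below. A simple power law with gamma = 2.2 is assumed here; the actual tool may use the exact sRGB transfer function, which differs slightly near black:

```javascript
const GAMMA = 2.2; // assumed exponent; sRGB uses a close approximation

// Expand a stored 0-255 value to linear light in the 0-1 range.
function gammaExpand(v) {
  return Math.pow(v / 255, GAMMA);
}

// Compress a 0-1 linear-light value back to a stored 0-255 value.
function gammaCompress(lin) {
  return Math.round(Math.pow(lin, 1 / GAMMA) * 255);
}
```

All averaging steps described above (red/green mixing, brightness compensation, blurring) should operate on the expanded linear values, since averaging gamma-compressed values would distort the resulting luminance.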