
31-05-2016  (1637 reads) Category: Articles

Color vs. B&W camera pixels



Gray-Scale (Black and White) Digital Camera: In a gray-scale camera, each pixel on the sensor captures the quantity of light striking it, or intensity, regardless of wavelength. Each pixel in the resulting image is then represented by an 8-bit number, where 0 = black and 255 = white, and the intermediate numbers represent the range of grays between those two extremes.
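As a minimal sketch of this representation (using NumPy, with made-up pixel values), a gray-scale image is simply a 2-D array of 8-bit intensities:

```python
import numpy as np

# A hypothetical 3x3 gray-scale readout: each entry is the light intensity
# measured at one pixel, quantized to 8 bits (0 = black, 255 = white).
gray = np.array([
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,  16, 100],
], dtype=np.uint8)

print(gray.min(), gray.max())  # 0 255
```

Intermediate values such as 128 map linearly onto shades of gray between the two extremes.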

Color Digital Camera: In a color camera, each pixel is the same as on a gray-scale camera, except that a color filter sits in front of it, so the pixel measures the intensity of the light striking it only within that filter's color range. The color filters are either red, green or blue, measuring the intensity in the three primary colors.

Camera Resolution: Usually expressed in millions of pixels or megapixels, the camera’s spatial resolution is found by multiplying the number of horizontal pixels found on the sensor by the number of vertical pixels (e.g. a 1 Megapixel camera might have a 1,024 x 1,024 pixel sensor array).

Output Resolution: The output resolution is the spatial resolution of the final image created by the sensor in pixels. It is the same as the camera resolution, with the exception that a gray scale camera has only one intensity value per pixel, whereas in a color camera, each output pixel has three intensity values, one each for red, green and blue.

Bayer Filter Pattern: A Bayer filter is the most common color filter array (CFA) pattern used in color cameras. It is the arrangement of color filters in front of the monochrome pixels contained in a color camera (see Figure 2).

How a Color Image is Formed: As described in the definitions, a color camera actually uses a monochrome (gray-scale) sensor with a color filter array (CFA) in front of it. Typically the CFA is the Bayer pattern shown in Figure 2. Figure 3 shows the resulting pattern created by the CFA on the sensor for red, green and blue values. Note that there are actually two times (2X) as many green sensors as red or blue ones. This is because the human eye is most sensitive to green light, so emphasizing green yields an image that the eye interprets as closest to “true color”.

Demosaicing: If we were to take the direct output of the sensor as is, the output image would have far less resolution than the camera image. For example, a 1 Megapixel camera would produce a 0.5 Megapixel green image, and 0.25 Megapixels each for the red and blue images. In order to reconstruct an output image of equal resolution to the original camera resolution, a process called demosaicing is used. Essentially, this process interpolates the two missing color values for each pixel by looking at neighboring pixel values for those colors. There are many different interpolation algorithms, and each camera manufacturer uses their own. As a simple example, Figure 4 shows what is called bilinear interpolation, whereby for each pixel in the camera image, the nearest 8 neighboring pixels are considered.
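As an illustrative sketch only (real cameras use proprietary, edge-aware algorithms), a bilinear-style demosaic of an assumed RGGB Bayer mosaic can be written in a few lines of NumPy; each missing color value is filled in by averaging the known same-color samples in the surrounding 3×3 neighbourhood:

```python
import numpy as np

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an assumed RGGB Bayer mosaic: measured
    samples are kept, missing ones are averaged from 3x3 neighbours."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    # Assumed RGGB layout: R at (even row, even col), B at (odd, odd),
    # G at the remaining positions -- note there are twice as many greens.
    masks = [
        (y % 2 == 0) & (x % 2 == 0),  # red
        (y % 2) != (x % 2),           # green
        (y % 2 == 1) & (x % 2 == 1),  # blue
    ]
    rgb = np.zeros((h, w, 3))
    for c, m in enumerate(masks):
        plane = np.pad(np.where(m, raw, 0.0), 1)
        count = np.pad(m.astype(float), 1)
        num = sum(plane[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        den = sum(count[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        # Keep measured samples as-is; interpolate only the missing ones.
        rgb[..., c] = np.where(m, raw, num / den)
    return rgb

# Sanity check: a uniformly lit sensor should demosaic to a flat image.
flat = bilinear_demosaic(np.full((4, 4), 100.0))
print(np.allclose(flat, 100.0))  # True
```

Note that the output has three values per pixel even though only one was actually measured at each location; the other two are estimates.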

What This Means for FlowCAM Camera Choices: Since color sensors are merely gray-scale sensors with a CFA overlay, there is really no such thing as a “color” camera. As shown above, the CFA creates a color image by sampling the three primary colors (red, green and blue) separately in physically different locations, and then making up color values at other locations via interpolation. This means that a color camera has inherently lower resolution than a gray-scale camera. Using demosaicing, the output image is brought up to the original camera resolution with RGB values at each pixel, so in reality the difference in resolution is not as great as it initially appears. Where it will show most is on edges, where color aliasing will occur, as simulated in Figure 5.

Color versus Black & White Camera Trade-offs: As detailed above, when choosing which camera to use in your FlowCAM, the primary trade-off to be considered is spatial versus color resolution. The monochrome camera will give you higher spatial resolution. This is particularly important when looking at objects which are relatively small compared to the calibration factor at the magnification being used, where every pixel counts in determining size and shape measurements. An example of this would be 8μm-10μm size particles when using the 10x magnification, where calibration is around 0.6μm/pixel.

When making this decision, it is always important to keep in mind the end goal of imaging particle analysis, which is to separate and characterize different particle types in a heterogeneous mixture. This is done by filtering the particles using either the value or statistical filtering capabilities of VisualSpreadsheet®. If the color information is particularly useful for an application, such as in identifying plankton, then a color camera may be appropriate. Keep in mind that FlowCAM is a back-lit (brightfield) system whereby opaque particles will only be seen as a black silhouette against a white background, so there is no benefit to color in these situations. Even when the particles are transparent, if there is no strong color component to distinguish them from other particles, the loss in spatial resolution caused by using the color camera may be counter-productive to particle characterization.


Why can color cameras use lower resolution lenses than monochrome cameras?

The difference between the usual color and monochrome (“black and white”) cameras is an additional layer of small color filters, mostly arranged in the so-called “Bayer pattern” (patented in 1976 by Bryce E. Bayer, an employee of Eastman Kodak).

Bayer Pattern on image sensor, (C) Wikipedia

In a perfect world, a lens would map any (arbitrarily small) point on the object to an (arbitrarily small) point on the sensor, so arbitrarily small pixel sizes could be supported.

Unfortunately, the laws of physics only allow a lens to illuminate small disks of light (“Airy disks”). The number of megapixels specified for a lens is a rough measure of the size of these disks (we are talking about a few micrometers here): the more megapixels, the smaller the disks.
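As a rough worked example (the 2.44·λ·N first-minimum formula is standard diffraction optics; the chosen wavelength and aperture here are just illustrative):

```python
# Back-of-the-envelope Airy disk size. The diameter of the first dark
# ring of the diffraction pattern is d = 2.44 * wavelength * f-number.
def airy_disk_diameter_um(wavelength_nm, f_number):
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Green light (550 nm) at f/8: about 10.7 micrometres -- several typical
# sensor pixels wide, so the disks, not the pixel pitch, can limit
# the achievable resolution.
print(round(airy_disk_diameter_um(550, 8), 1))  # 10.7
```

Stopping the lens down (a larger f-number) makes the disks larger, which is why diffraction eventually limits sharpness at small apertures.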

For the image from a monochrome sensor to be in focus, the disks must fit within the footprint of one pixel.
On a color sensor with a Bayer pattern, there are red pixels only in every other column and row. Here, disks of light that fit within a single pixel may even be undesirable:

Sample image for color moiré (click to enlarge), (c) Wikipedia

The disks should be large enough that one red, one blue and two green pixels are covered, e.g. 2×2 pixels in size. This means that, compared with a lens for a monochrome application, the lens may (and maybe even should) have a reduced resolution. To avoid color moiré we can either use a lens with lower resolution, as stated above, or an (expensive!) so-called “OLP” filter, which guarantees a minimum blur of 2 pixels. Such filters are requested, for example, by videoconferencing customers who want to avoid color moiré. OLP = “Optical Low-Pass filter”.

Let's assume the sensor has one million pixels. Then it has 500,000 green, 250,000 blue and 250,000 red pixels. The software on the computer converts this to 1,000,000 pixels (an “RGB image”), each of which has red, green and blue intensities.
This is possible because every pixel has direct neighbours in the other colors. Say, for the location of a green pixel, the red intensity is predicted(!) from the intensity of the red pixels in the direct neighbourhood. As a side effect, the RGB image takes up more space on the computer than the raw data from the sensor!
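The pixel bookkeeping above can be sketched directly (the one-megapixel figure and 8 bits per sample are the assumptions used in the text):

```python
# Pixel bookkeeping for a hypothetical 1-megapixel RGGB sensor: the raw
# data holds one 8-bit value per pixel, while the demosaiced RGB image
# holds three values per pixel -- three times the data, with no new
# information actually measured.
sensor_pixels = 1_000_000
green = sensor_pixels // 2          # 500,000 green samples
red = blue = sensor_pixels // 4     # 250,000 each

raw_bytes = sensor_pixels * 1       # 8 bits per sample
rgb_bytes = sensor_pixels * 3       # 8 bits per channel, 3 channels

print(green, red, blue)             # 500000 250000 250000
print(rgb_bytes // raw_bytes)       # 3
```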


B&W camera conversion



One project we have been working on for a few years is physically converting a color DSLR camera to black & white.  Why?  An equivalent monochrome camera will always take a much sharper image than a color camera, because a color camera's resolution depends on the color content of the picture.  To understand why, you have to understand how a color camera sensor works.  We have a rather technical explanation for those so inclined.  For the artistic types that aren't interested in the science, please just consider the two pictures below.


Stock Canon 30D taking a black and white picture
Our modified Canon 30D with Color Filter Array (CFA) removed

Which picture looks better to you?  These pictures are an area of a larger picture zoomed to 100% in Photoshop which can be seen here.

Some history.  Kodak made a series of monochrome DSLR cameras, starting with the DCS-420m (1.2 megapixel), the DCS-460m (6 megapixel) and lastly the DCS-760m (an improved 6 megapixel).  The DCS-760m sold for $10,000.00 in 2001.  Unfortunately, the market for B/W DSLR cameras is quite small, and Kodak discontinued the line.  Today, Kodak has discontinued all DSLR cameras and the company is a small fraction of the size it once was.  No manufacturer is making a B/W consumer digital SLR camera.  Some medium format monochrome digital backs are available, and they are quite expensive.

A camera sensor is composed of many different layers.  From the top, the first layers are:


- Microlens, for focusing light onto the particular pixel.  There is one microlens for each pixel.
- Color Filter Array (CFA).  A pattern of red, green and blue dots is printed by photolithography across the surface.  Each pixel will be only one of three colors: red, green or blue.  The CFA is usually a Red-Green-Blue-Green Bayer pattern.  A 10 megapixel camera will have 5 million green pixels, 2.5 million red pixels and 2.5 million blue pixels.
- Black and White Photodiode.  Under the color filter is a black and white device that sees light.  A 10 megapixel camera will have 10 million black and white pixels.

The picture above shows the structure of a red pixel.  A green pixel will have a green color filter, and a blue pixel will have a blue color filter.

Pixels are usually arranged in a Bayer Red-Green-Blue-Green pattern.  Green is used twice as much as red and blue in an effort to mimic the human eye, which sees green better than any other color.



Canon 450D image sensor showing the CFA


For size comparison, the lines above are 0.01mm apart


With most camera resolution reviews, tests are performed with a black and white test target under white light.  A black and white target will show the maximum resolution of which a color camera is capable, because such a target contains only luminance (brightness) data and no chrominance (color) data.  Why is this important?  Remember, for every four pixels on a color sensor, you get 2 green, 1 red and 1 blue.  So suppose the target were illuminated with blue light?  Only 1/4 of the pixels could possibly see it.  Effectively, your 10 megapixel camera just turned into a 2.5 megapixel camera.

The color content of the picture will change a color camera's resolution. This is why a red rose sometimes looks blurry in comparison to other parts of a picture.  The red rose is only triggering 1/4 of the pixels.
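The arithmetic above can be sketched directly (the 10-megapixel figure is the example from the text; the channel fractions follow the Bayer layout described earlier):

```python
# Effective sampling of an RGGB color sensor under narrow-band light:
# only the pixels whose filter passes the illumination record real data.
bayer_fraction = {"red": 0.25, "green": 0.50, "blue": 0.25}

sensor_mp = 10.0  # the 10-megapixel example from the text
effective = {c: sensor_mp * f for c, f in bayer_fraction.items()}
print(effective["blue"])   # 2.5 -- pure blue light "sees" only 2.5 MP
print(effective["green"])  # 5.0 -- green fares best, with half the pixels
```

A monochrome sensor, by contrast, keeps all 10 megapixels regardless of the color of the light.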

To test our theory, we performed the following test.


- We started with two Canon 30D cameras.  We converted one to monochrome by removing the AA (anti-aliasing) filter, microlenses and CFA.  This is much easier said than done.
- We used a variety of resolution test targets in a room with controlled lighting.
- A Coastal Optics 60mm UV-VIS-APO lens set to f/8 was used for both cameras.
- White light and pure red, green and blue light were used to illuminate the targets.
- All pictures were taken in RAW.
- Both cameras were set to ASA 200, mirror lockup, timed shutter release on a tripod.
- Results were analyzed in ImageJ to measure sharpness (Modulation Transfer Function, or MTF).
- Results were also viewed in Photoshop.


First we will show the MTF of the two cameras.  The monochrome camera is the left column.  For those who would like to understand more about MTF, Luminous Landscapes has a good discussion here.  Basically, MTF is a mathematical way to quantify lens and camera sharpness.  One can easily look at a picture and see that one is sharper than the other, but the question is "How much"?  MTF allows us to put numbers to what we can see.

The further the MTF curve extends to the right and the higher it is, the sharper the picture.  There are four MTF graphs for each camera.  The first shows white-light sharpness; next is green, then blue and lastly red.  The blue light was the weakest, so the blue MTF showed the worst performance from both cameras.  You can see that the B/W camera has much higher performance: at 0.2 cycles/pixel the monochrome camera has twice the sharpness, and at 0.30 cycles/pixel, five times the sharpness.


You can see that in every case, the monochrome camera substantially outperforms the color, stock version.


Real life performance.  The titles of the different pictures will tell you about the particular test.  First, we show the color camera taking the same picture under green, blue, red and white light.  Note that the green picture has the highest resolution because there are twice as many green pixels compared to red or blue.  Notice that the blue picture shows the highest noise which may relate to the blue light being dimmest and/or the response of the sensor to blue light.  Notice the difference between the color and monochrome cameras.

You can click on the pictures to download a TIFF version.

At 50% in Photoshop



At 100% in Photoshop


That's why a monochrome sensor has an advantage over a color camera when taking black and white pictures.

