Digital Cameras

A digital camera (or digicam) is a camera that encodes images and videos digitally and stores them for later reproduction. Most cameras sold today are digital, and digital cameras are incorporated into many devices, ranging from PDAs and mobile phones (called camera phones) to vehicles.

Digital and film cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Unlike film cameras, however, digital cameras can display images on a screen immediately after they are recorded, and can store and delete images from memory. Many digital cameras can also record moving video with sound. Some digital cameras can crop and stitch pictures and perform other elementary image editing.

Steven Sasson, an engineer at Eastman Kodak, invented and built the first electronic camera using a charge-coupled device image sensor in 1975. Earlier electronic cameras used a camera tube; later ones digitized the signal. Early uses were mainly military and scientific, followed by medical and news applications. In the mid-to-late 1990s, digital cameras became common among consumers. By the mid-2000s, digital cameras had largely replaced film cameras, and higher-end cell phones had an integrated digital camera. By the beginning of the 2010s, almost all smartphones had an integrated digital camera.

Image sensors

The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Almost all small-sensor cameras use back-side-illuminated CMOS (BSI-CMOS) sensors, while large-sensor cameras such as DSLRs seldom use them, because at larger sensor sizes the added cost outweighs the modest benefit. In full sunlight, CCD sensors still produce the best image quality. Overall image quality, however, depends more on the camera's image processing capability than on the sensor type.

The resolution of a digital camera is often limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value that is read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image. The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000 pixel sensor would have 1,000,000 pixels, or 1 megapixel.
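
As a back-of-the-envelope check, the pixel count is simply the product of the two sensor dimensions. A minimal Python sketch (the dimensions here are only illustrative):

    def pixel_count(rows: int, cols: int) -> int:
        # Total pixels = number of rows x number of columns.
        return rows * cols

    n = pixel_count(1000, 1000)
    print(n, "pixels =", n / 1_000_000, "megapixels")  # 1000000 pixels = 1.0 megapixels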

Methods of image capture

Digital camera, partly disassembled. The lens assembly (bottom right) is partially removed, but the sensor (top right) still captures an image, as seen on the LCD screen (bottom left).

Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters.

Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors (one each for the primary additive colors red, green, and blue) which are exposed to the same image via a beam splitter.

Multi-shot exposes the sensor to the image in a sequence of three or more openings of the lens aperture. The multi-shot technique has been applied in several ways. Originally, the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multi-shot method is called microscanning. This method uses a single sensor chip with a Bayer filter and physically moves the sensor on the focal plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third version combines the two methods without a Bayer filter on the chip.

The third method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras utilize only a single line of photosensors, or three lines for the three colors. Scanning may be accomplished by moving the sensor (for example, when using color co-site sampling) or by rotating the whole camera. A digital rotating line camera offers images of very high total resolution.

The choice of method for a given capture is determined largely by the subject matter. It is usually inappropriate to attempt to capture a subject that moves with anything but a single-shot system. However, the higher color fidelity and larger file sizes and resolutions available with multi-shot and scanning backs make them attractive for commercial photographers working with stationary subjects and large-format photographs.

Improvements in single-shot cameras and image file processing at the beginning of the 21st century made single-shot cameras almost completely dominant, even in high-end commercial photography.

Filter mosaics, interpolation, and aliasing

The Bayer arrangement of color filters on the pixel array of an image sensor.

Most current consumer digital cameras use a Bayer filter mosaic in combination with an optical anti-aliasing filter to reduce the aliasing due to the reduced sampling of the different primary-color images. A demosaicing algorithm is used to interpolate color information to create a full array of RGB image data.

Cameras that use a beam-splitter single-shot 3CCD approach, a three-filter multi-shot approach, color co-site sampling, or a Foveon X3 sensor need neither anti-aliasing filters nor demosaicing.

Firmware in the camera, or software in a raw converter program such as Adobe Camera Raw, interprets the raw data from the sensor to obtain a full-color image, because the RGB color model requires three intensity values for each pixel: one each for red, green, and blue (other color models, when used, also require three or more values per pixel). A single sensor element cannot record these three intensities simultaneously, so a color filter array (CFA) must be used to selectively filter a particular color for each pixel.

The Bayer filter pattern is a repeating 2x2 mosaic pattern of light filters, with green ones at opposite corners and red and blue in the other two positions. The high proportion of green takes advantage of properties of the human visual system, which determines brightness mostly from green and is far more sensitive to brightness than to hue or saturation. Sometimes a 4-color filter pattern is used, often involving two different hues of green. This provides potentially more accurate color, but requires a slightly more complicated interpolation process.
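
To make the layout concrete, the following Python sketch tiles the 2x2 Bayer cell across a small pixel array. NumPy is used for tiling; the RGGB corner ordering shown is one common convention, and real sensors differ in which corner holds which filter:

    import numpy as np

    # One 2x2 Bayer cell: green at opposite corners, red and blue
    # in the other two positions (RGGB ordering assumed here).
    BAYER_CELL = np.array([["R", "G"],
                           ["G", "B"]])

    def bayer_mosaic(rows, cols):
        # Tile the cell to cover a rows x cols pixel array, then crop.
        reps = ((rows + 1) // 2, (cols + 1) // 2)
        return np.tile(BAYER_CELL, reps)[:rows, :cols]

    print(bayer_mosaic(4, 6))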

The color intensity values not captured for each pixel can be interpolated from the values of adjacent pixels that represent the color being calculated, as sketched below.
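
The simplest such interpolation is bilinear demosaicing: each missing value is an average of the nearest same-color neighbors. The sketch below assumes the RGGB mosaic from the previous example and uses normalized convolution; production demosaicing algorithms are considerably more elaborate:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # raw: 2D float array of sensor readings under an RGGB Bayer mosaic.
        h, w = raw.shape
        # Boolean masks marking which pixels sampled each color channel.
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True   # red
        masks[0::2, 1::2, 1] = True   # green on red rows
        masks[1::2, 0::2, 1] = True   # green on blue rows
        masks[1::2, 1::2, 2] = True   # blue
        # Neighbor-averaging kernels: orthogonal neighbors for green,
        # orthogonal and diagonal neighbors for red/blue.
        k_g  = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]])
        k_rb = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
        rgb = np.empty((h, w, 3))
        for c, k in zip(range(3), (k_rb, k_g, k_rb)):
            sampled = np.where(masks[..., c], raw, 0.0)
            # Dividing by the convolved mask averages only the pixels
            # that actually sampled this color (normalized convolution).
            weight = convolve(masks[..., c].astype(float), k, mode="mirror")
            rgb[..., c] = convolve(sampled, k, mode="mirror") / weight
        return rgb

    # Example on a random "raw" frame with values in [0, 1]:
    full_color = demosaic_bilinear(np.random.rand(8, 8))
    print(full_color.shape)  # (8, 8, 3)

At pixels that sampled a given color, the division reproduces the measured value exactly; elsewhere it yields the average of the two or four nearest same-color neighbors, which is what bilinear interpolation prescribes.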