Infrared Cameras: How They Work

All objects emit thermal energy (heat) to a greater or lesser degree; the peak of the emitted wavelengths moves toward shorter wavelengths as the temperature of the object increases. This is why a piece of metal glows visibly when it gets very hot, but not at cooler temperatures.
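This relationship is captured by Wien's displacement law, which gives the wavelength at which an object's emission peaks for a given temperature. The short Python sketch below is a minimal illustration; the temperatures used are illustrative examples, not values from this article.

```python
# Wien's displacement law: the peak of blackbody emission shifts toward
# shorter wavelengths as temperature rises.
# lambda_peak (microns) = b / T, with Wien's constant b ~ 2898 micron-kelvins.

WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micron-kelvins

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength (microns) at which a blackbody at temperature_k emits most strongly."""
    return WIEN_B_UM_K / temperature_k

print(peak_wavelength_um(300))    # ~9.7 microns: room-temperature objects peak in the long-wave IR
print(peak_wavelength_um(1300))   # ~2.2 microns: red-hot metal, with enough emission spilling into visible red
print(peak_wavelength_um(5800))   # ~0.5 microns: the Sun, peaking in visible light
```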

Infrared cameras work by converting thermal energy (from a specific spectral wavelength or range) into a video signal. These wavelengths are longer than those the human eye can see. The prefix “infra-” means “below”: infrared light lies below red in frequency, which means the wavelengths an infrared imager uses are longer than the “red” wavelengths the eye can see (“red” corresponds to light with wavelengths around 0.7 microns).

Many video cameras (even consumer-grade types) are sensitive slightly beyond the visual range, up to about 0.9 microns (the human eye can typically see wavelengths between about 0.4 and 0.75 microns). Even though these wavelengths are considered “infrared”, very few objects are hot enough to emit much energy at such short wavelengths. This spectral region, from about 0.75 to about 3.0 microns, is called Near Infrared (NIR) or Short Wave Infrared (SWIR).
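To see just how little a room-temperature object emits at these short wavelengths, Planck's law can be used to compare its emission at a SWIR wavelength with its emission at a long-wave infrared wavelength. The sketch below is only an illustration; the choice of 300 K, 1.5 microns, and 10 microns is an assumption for the example.

```python
import math

# Planck's law: spectral radiance of a blackbody, per unit wavelength.
# Used here to show why a ~300 K object emits almost nothing in the
# NIR/SWIR band but a great deal in the long-wave infrared.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def spectral_radiance(wavelength_um: float, temperature_k: float) -> float:
    """Blackbody spectral radiance (W / sr / m^3) at the given wavelength and temperature."""
    lam = wavelength_um * 1e-6  # microns -> meters
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * temperature_k)) - 1)

# A 300 K object, compared at a SWIR wavelength (1.5 microns) and an LWIR wavelength (10 microns):
swir = spectral_radiance(1.5, 300)
lwir = spectral_radiance(10.0, 300)
print(f"SWIR (1.5 um): {swir:.3e}")
print(f"LWIR (10 um):  {lwir:.3e}")
print(f"LWIR / SWIR ratio: {lwir / swir:.3e}")  # roughly seven to eight orders of magnitude
```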

Since most practical objects emit so little energy in this range, reflected light is needed to form an image; this means infrared illuminators must be used, an approach known as “active illumination”. Illumination in this spectral range, even though it can’t be seen by the naked eye, can be detected by any imager that is sensitive in that range.

However, just as with visible light sources, anyone with a CCD camera can see the light source from much farther away than the range at which the illumination is actually effective for the user. For this reason, the military and other organizations that want to maintain secrecy generally do not use active infrared illumination. They instead prefer “passive” systems: infrared imagers that are sensitive to longer wavelengths. Since cooler objects emit thermal energy at longer wavelengths, targets generate their own “light” (emitted thermal energy), so they need no active illumination and are considered passive sources. Most targets relevant to surveillance applications (people, vehicles, backgrounds, and most other objects) emit thermal energy at these longer wavelengths.

Most thermal imaging systems operate at wavelengths between 3 and 14 microns. However, water vapor in the atmosphere tends to absorb energy at wavelengths between about 5 and 7 microns, so the thermal imaging region of 3 to 14 microns is broken into two bands. The term Mid Wave Infrared (MWIR) refers to the waveband of about 3 to 5 microns, while the term Long Wave Infrared (LWIR) refers to wavelengths between 7 and 14 microns. The MWIR and LWIR bands are known as “thermal imaging” (or “thermal infrared”) bands. Both bands are useful for imaging applications, with subtle differences in performance that give each of them advantages in certain situations. There are also significant differences in the technologies used to image each of these wavebands; in the most general sense, MWIR systems tend to be somewhat more sensitive, but must be cryogenically cooled, adding to system complexity and cost. LWIR systems often do not require cryogenic cooling; this lowers cost and increases reliability, but generally at the expense of sensitivity. Both MWIR and LWIR are used extensively in surveillance systems, for different applications.
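As a quick reference, the band edges described above can be collected into a small lookup. The sketch below simply encodes the approximate ranges quoted in this article; exact band definitions vary by convention.

```python
# Rough wavelength-to-band lookup for the infrared regions discussed above.
# Band edges are approximate and follow the figures quoted in the text.

def ir_band(wavelength_um: float) -> str:
    """Return the common name of the infrared band containing the given wavelength (microns)."""
    if 0.75 <= wavelength_um < 3.0:
        return "NIR/SWIR (reflected-light imaging, usually needs active illumination)"
    if 3.0 <= wavelength_um <= 5.0:
        return "MWIR (thermal imaging, typically cryogenically cooled detectors)"
    if 5.0 < wavelength_um < 7.0:
        return "atmospheric absorption gap (water vapor absorbs strongly here)"
    if 7.0 <= wavelength_um <= 14.0:
        return "LWIR (thermal imaging, often uncooled microbolometers)"
    return "outside the ranges discussed here"

for wl in (0.9, 4.0, 6.0, 10.0):
    print(f"{wl} um -> {ir_band(wl)}")
```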

How do thermal infrared detectors work?

Infrared cameras usually work in one of two ways: they either detect infrared (IR) photons directly, or they detect small changes in temperature in an array of thermal-absorbing elements. Infrared cameras that detect photons are, in turn, either photovoltaic or photoconductive. Photovoltaic cameras use a material that produces a voltage difference when photons of a certain wavelength strike it. Photoconductive cameras use a material whose electrical resistance changes when photons of a certain wavelength strike it. In either case, the array of detectors sends its signals to a readout circuit, and then, after a fair bit of signal processing, the camera turns the pattern into a grayscale image, with the intensity of the signal shown as different shades of gray. At this point, colorization may be added to the camera’s output to ease interpretation.
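The final step described above, turning per-pixel signal strengths into shades of gray, can be sketched in a few lines. This is a minimal illustration only; real cameras apply far more processing (non-uniformity correction, bad-pixel replacement, noise filtering), and the sample frame below is invented for the example.

```python
import numpy as np

def to_grayscale(raw_signal: np.ndarray) -> np.ndarray:
    """Linearly map a 2-D array of detector readings onto 0-255 gray levels."""
    lo, hi = raw_signal.min(), raw_signal.max()
    if hi == lo:                             # flat scene: avoid division by zero
        return np.zeros_like(raw_signal, dtype=np.uint8)
    scaled = (raw_signal - lo) / (hi - lo)   # normalize readings to 0..1
    return (scaled * 255).astype(np.uint8)   # brighter pixel = stronger signal = hotter

# Illustrative 4x4 "frame" of raw detector readings (arbitrary units):
frame = np.array([[10, 12, 11, 40],
                  [11, 13, 55, 60],
                  [10, 50, 58, 61],
                  [12, 11, 12, 13]], dtype=float)
print(to_grayscale(frame))
```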

One major difference between these two approaches is that the materials that detect photons directly, whether photovoltaic or photoconductive, only exhibit this “photon detection” capability when they are cooled to very low temperatures.

However, other thermal imaging cameras do not require this kind of cooling, because they do not detect photons directly. Instead, their imaging array is made of tiny elements that absorb incoming thermal radiation. These elements, known as microbolometers, warm slightly as they absorb thermal energy, and that change in temperature changes their electrical resistance. The resistance change can then be read out and processed into a video signal, much as in the photon-detecting case.
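The idea behind the microbolometer readout can be illustrated with the temperature coefficient of resistance (TCR), the fractional change in resistance per degree of warming. The values in the sketch below (a -2 % per kelvin TCR and a 100 kΩ baseline resistance) are assumptions chosen for illustration, not specifications of any particular detector.

```python
# Minimal sketch of the microbolometer principle: absorbed radiation warms an
# element slightly, which changes its electrical resistance; the readout turns
# that resistance change back into an estimate of how much the element warmed.

TCR = -0.02             # assumed fractional resistance change per kelvin
R_BASELINE = 100_000.0  # assumed element resistance (ohms) at the reference temperature

def temperature_rise_from_resistance(measured_r: float) -> float:
    """Estimate how much an element warmed (kelvin), given its measured resistance (ohms)."""
    return (measured_r - R_BASELINE) / (R_BASELINE * TCR)

# An element whose resistance dropped from 100 kOhm to 99.8 kOhm warmed by about 0.1 K:
print(temperature_rise_from_resistance(99_800.0))
```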

Do you have additional questions about how infrared cameras work? Our team of scientists, engineers and security specialists can help. Contact us today for help in meeting your security objectives.