Infrared Fever Detectors Used for COVID-19 Aren’t As Accurate As You Think

The no-contact temperature screening systems can be off as much as four degrees


This is an edited guest post. The views expressed here are solely those of the author and do not represent the position of IEEE or The Institute.

THE INSTITUTE As part of the effort to curb the spread of the coronavirus, countries have implemented body-temperature screenings at airports, train stations, and other public spaces to detect fever. Many of the systems include the use of infrared (IR), or thermographic, cameras such as those featured in The Institute article “Thermal Cameras Are Being Outfitted to Detect Fever and Conduct Contact Tracing for COVID-19."

The market has been flooded with infrared fever-screening products, but almost none of them has undergone independent testing. Demand has been so great that many companies rushed into the field without understanding the accuracy requirements, using technology that cannot possibly measure body temperature well enough. In many cases, their products are unable to tell the difference between core temperatures of 35 and 40 °C, or to distinguish hypothermia from a severe fever.

Some of the companies, under pressure to deliver, succumbed to the practice of averaging the measurement with a normal 37 °C. In the worst cases, they ignored inaccurate measurements and reported normal temperatures—which is as unethical as producing a COVID-19 test kit that always gives a negative result no matter what.

Typical accuracy for thermographic cameras is ±2 °C, with a few claiming accuracy as good as ±1 °C. Unfortunately, though, 1 °C accuracy isn't good enough. For fever detection to work, core body temperature needs to be estimated with an accuracy of at least ±0.5 °C. Most systems don't reach that benchmark.

Also, a body temperature reading is dependent on the ambient air temperature, and our assessment revealed that in typical testing areas a device that doesn't compensate for that—which is nearly all devices—will detect at best 15 percent of fevers.

Another surprising problem is that pixel temperatures are not independent: the variable amount of heat given off by the rest of the face can distort the reading from the small area used for measurement by more than 1 °C, depending on the system and on how cold or covered the rest of the face is. This problem isn't widely discussed in the thermal-imaging industry.

In fact, many system designers aren't even aware of it, because it's subtle enough that one could spend a career designing thermography systems and never see it, until accuracy better than 2 °C is required. Once you know how it works, it's easy to replicate.

There is good news: It is possible to perform accurate fever screening. Our research shows that by designing and integrating every component, we can detect up to 95 percent of fevers.

CURRENT SCREENING METHODS

There are three types of fever screenings commonly used in North America. Each comes with its own limitations.

Clinicians typically use oral thermometers during medical visits. Unfortunately, high-quality clinical-grade thermometers are not widely available. Furthermore, it's not efficient or safe to use them outside clinical environments, because the operator needs to be in close proximity to possibly infected people. Also, if people drink something hot or cold before getting their temperature taken or cannot breathe through their nose and must open their mouth, that will affect the results.

Noncontact infrared thermometers (NCITs), commonly known as forehead screeners, are being used at fitness centers, schools, and businesses. Many of you probably have had your temperature taken by one recently.

Some NCITs, including those with U.S. Food and Drug Administration approval, struggle to tell the difference between people with hypothermia (35 °C) and those with a severe fever (40 °C), even when operated properly in a controlled environment.

Even though NCIT sensors themselves are accurate, the devices don't account for how air temperature affects their measurements. NCITs also must be operated at a consistent distance from their target, and unfortunately foreheads vary too much. If NCITs reported raw temperatures under real-world conditions, the readings would often be absurd. Some devices do report absurd readings, while others seem to report close-to-normal temperatures most of the time. The latter devices might be doing a lot of averaging. For whatever reason, some manufacturers apparently have decided that rather than figure out how to get accurate readings, they could fudge the numbers and no one would be likely to notice.

Some operators do report the ridiculous readings, while others simply ignore impossibly low readings. That makes many NCITs useless for clinical purposes. Unfortunately, though, because NCITs are the easiest thermometers to obtain, many businesses use them to meet local government requirements.

The thermal-imaging field is where we are seeing lots of new products hit the market. These products can work automatically from a safe distance. Near room temperature, every object glows in the far-infrared part of the electromagnetic spectrum, with an intensity proportional to its emissivity. Thermal sensing can detect that light and convert it into a temperature.
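As a rough illustration of that light-to-temperature conversion, the Stefan-Boltzmann law relates a gray body's total radiant exitance to its temperature. This is a minimal sketch, not how any particular camera works; the skin emissivity of 0.98 is a commonly cited literature value, not a figure from this article.

```python
# Sketch: converting total thermal radiance to a surface temperature
# via the Stefan-Boltzmann law, M = emissivity * sigma * T^4.
# The 0.98 emissivity for skin is a literature value (an assumption).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def temperature_to_radiance(temp_k: float, emissivity: float = 0.98) -> float:
    """Total radiant exitance (W/m^2) of a gray body at temp_k."""
    return emissivity * SIGMA * temp_k ** 4

def radiance_to_temperature(exitance_w_m2: float, emissivity: float = 0.98) -> float:
    """Invert M = eps * sigma * T^4 to recover temperature in kelvin."""
    return (exitance_w_m2 / (emissivity * SIGMA)) ** 0.25

# A 34 C (307.15 K) patch of skin emits roughly 495 W/m^2;
# round-tripping through both functions recovers the temperature.
m = temperature_to_radiance(307.15)
t = radiance_to_temperature(m)
```

In practice a camera sees only a band of the spectrum and must be calibrated against reference sources, but the inversion idea is the same.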

An NCIT uses a single-pixel sensor that averages all the temperatures in its field of view, which is why it must be operated so close to the skin. Thermal-imaging systems, on the other hand, use an array of identical pixel sensors to produce images of radiant intensity, the amount of thermal light falling on each pixel per second per solid angle. To take someone's temperature, an infrared device must first acquire an accurate surface-temperature measurement of a patch of skin. Core body temperature can then be extrapolated, using a previously calibrated relationship between skin temperature, air temperature, and core body temperature.

The approach works because there is a consistently thin layer of insulation between core blood and the air at the inner canthus, often referred to as the tear duct: the region where the eye meets the bridge of the nose.

Our studies, along with data reported by other researchers, show that the surface temperature tracks the core temperature but is reduced by a predictable fraction of the difference between core and ambient air temperature. In fact, a 4 °C change in the room's air temperature will change the core-temperature reading by a full 1 °C.
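That relationship can be sketched as a simple linear model. The fraction f = 0.2 below is an illustrative value chosen so that a 4 °C error in the assumed air temperature shifts the core estimate by 1 °C, consistent with the figure quoted here; it is not a calibrated constant from any real device.

```python
# Linear surface-to-core model: T_skin = T_core - f * (T_core - T_air).
# f = 0.2 is an illustrative assumption, chosen so that a 4 C ambient
# error produces a 1 C error in the core estimate.

F = 0.2  # fraction of the core-to-air difference lost at the inner canthus

def skin_from_core(t_core: float, t_air: float, f: float = F) -> float:
    """Predicted inner-canthus surface temperature."""
    return t_core - f * (t_core - t_air)

def core_from_skin(t_skin: float, t_air: float, f: float = F) -> float:
    """Invert the linear model to estimate core temperature."""
    return (t_skin - f * t_air) / (1.0 - f)

# A 37 C core in 22 C air presents a 34 C canthus temperature.
skin = skin_from_core(37.0, 22.0)    # 34.0
# If the estimator wrongly assumes 26 C air (a 4 C error), the same
# skin reading yields a core estimate a full 1 C too low.
biased = core_from_skin(skin, 26.0)  # 36.0
```

This also shows why a device that ignores air temperature entirely cannot reliably separate a mild fever from a normal reading.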

Despite manufacturer claims, no thermal imager has been through the FDA's device-approval process specifically for fever screening. Because of the urgent need for devices that could help fight the pandemic, the FDA released guidance in April declaring the agency did not intend to object to the sale and use of thermal-imaging devices.

However, the agency stated that such devices should (not must) follow an established standard (IEC 80601-2-59:2017) and technical report (ISO/TR 13154:2017) for thermographic fever detection. The standards were designed to minimize mistakes in performing fever detection using off-the-shelf thermographic cameras and IR calibration equipment; no device existed that was explicitly designed for that purpose.

Following the standards, however, is no guarantee the system will be able to detect feverish temperatures reliably. For example, the ISO standard allows the device to take measurements in the same manner even if air temperature changes as much as 4 °C. As discussed before, such a change will throw off the measurement enough to miss mild fevers or have at least a 50 percent false-positive rate.

Nevertheless, thermal imaging is the most promising technology, because it can operate automatically from a safe distance and, importantly, has no additional per-scan costs.

CALIBRATION NEEDED

To reach or exceed ±0.5 °C accuracy, an IR calibration source, known as a blackbody, must be set to a temperature near the expected surface temperature and placed in the field of view. Even at that level of accuracy, however, the rate of false positives or false negatives remains high; it could exceed 20 percent, in fact. A better target is an accuracy of ±0.3 °C, which would bring the error rate down to single digits.
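The simplest use of a single in-scene blackbody is a per-frame offset correction: the difference between the reference's known and measured temperatures is applied to every other reading. This is a minimal sketch of that idea; the function names are illustrative and not drawn from any real camera SDK.

```python
# Minimal sketch of single-blackbody drift correction: a reference
# source at a known temperature sits in the camera's field of view,
# and the gap between its known and measured temperatures becomes a
# per-frame offset applied to all other readings. Names are illustrative.

def blackbody_offset(known_temp_c: float, measured_temp_c: float) -> float:
    """Per-frame drift correction derived from the reference source."""
    return known_temp_c - measured_temp_c

def correct_reading(raw_temp_c: float, offset_c: float) -> float:
    """Apply the blackbody-derived offset to a subject pixel."""
    return raw_temp_c + offset_c

# Camera has drifted 0.8 C low: a blackbody set to 35.0 C reads 34.2 C.
offset = blackbody_offset(35.0, 34.2)      # +0.8
corrected = correct_reading(33.3, offset)  # 34.1
```

An offset alone cannot fix gain errors or the pixel-dependence effects described earlier, which is part of why lab accuracy does not automatically carry over to the field.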

With considerable engineering effort, ±0.3 °C can be achieved in laboratory conditions. Far more challenging are real-world conditions, especially considering the pixel luminance dependence that throws off the measurement by a degree or more if not compensated for.

By redesigning the system from the ground up—in particular designing a new calibration process to compensate for pixel luminance effects—we at Fever Inspect have demonstrated ±0.3 °C accuracy is achievable in real-world conditions.

To the best of our knowledge, we are the first to calibrate and correct for the pixel luminance artifact, which is invisible in the lab but in the real world can render a system useless. Furthermore, by incorporating dual temperature references (blackbodies), a time-of-flight distance sensor array, and an ambient air temperature sensor—all linked together in a single system—we can maintain a calibrated system far better than one made up of separate components. Finally, we have developed a heated air probe that allows us to measure local air thermal conductivity, which otherwise can vary enough to throw off the surface-to-core temperature process when the air isn't perfectly still.
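The dual temperature references mentioned above suggest a two-point calibration, in which both the gain and the offset of the camera's response are solved each frame from two known reference temperatures. The following is a sketch of that general technique under those assumptions, not the authors' actual algorithm, which the article does not describe.

```python
# Sketch of two-point (gain + offset) calibration using two in-scene
# reference blackbodies at known temperatures. With two known points,
# both the scale and the shift of the camera's response can be solved.
# Illustrative only; not the actual Fever Inspect algorithm.

def two_point_calibration(known_lo: float, meas_lo: float,
                          known_hi: float, meas_hi: float):
    """Return (gain, offset) mapping measured temps to true temps."""
    gain = (known_hi - known_lo) / (meas_hi - meas_lo)
    offset = known_lo - gain * meas_lo
    return gain, offset

def apply_calibration(measured: float, gain: float, offset: float) -> float:
    """Map a raw pixel temperature onto the calibrated scale."""
    return gain * measured + offset

# References at 33.0 C and 39.0 C read 32.5 C and 38.0 C: the camera's
# response is both shifted and slightly compressed.
gain, offset = two_point_calibration(33.0, 32.5, 39.0, 38.0)
true_temp = apply_calibration(34.2, gain, offset)
```

Because the correction is recomputed from the references every frame, slow sensor drift is removed continuously rather than at periodic factory calibrations.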

ACCURACY CONCERNS

Just as important as accuracy is how the device is used. The practice of checking people's temperatures when they enter a building has changed little since the first systems were developed for the 2003 SARS outbreak. The process has two major problems. The relationship between surface and core temperature can be thrown off by a person's recent exposure to hot or cold air. Also, a single measurement misses the fact that a person's core temperature varies throughout the day and that fevers develop over time. Combined, the two problems mean the old method can miss fevers.

Consequently, we advocate a more routine monitoring model: taking people's temperatures two or three times during the day at self-check temperature stations placed in high-traffic areas or at a building's entrance.

The increased testing improves the chances of detecting a just-developing fever, which a single morning scan could easily miss, and it avoids the risk of meaningless readings taken while people are still warming up after coming in from the cold.
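If each scan is treated as having some probability of catching a developing fever, the benefit of repeated checks can be quantified. Both the 60 percent per-scan figure below and the independence assumption are purely illustrative simplifications, not numbers from this article.

```python
# Sketch of why repeated checks help: if one scan detects a developing
# fever with probability p, then n independent scans miss it with
# probability (1 - p) ** n. The per-scan probability is illustrative.

def detect_at_least_once(p_single: float, n_scans: int) -> float:
    """Probability that at least one of n independent scans detects."""
    return 1.0 - (1.0 - p_single) ** n_scans

# Even a modest 60% per-scan detection rate climbs with repetition:
one = detect_at_least_once(0.6, 1)    # 0.6
three = detect_at_least_once(0.6, 3)  # 0.936
```

In reality the scans are not independent (a fever present in the morning is likely still present at noon), so the true benefit comes mainly from catching fevers that develop between scans.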

Until the first thermographic device goes through the FDA process for its intended use, it will remain difficult for the agency to change how it regulates thermographic fever detection. Several companies have constructed devices with off-the-shelf equipment that follow the IEC/ISO standards.

The current situation provides an opportunity to improve on the old ways of temperature screening so we all can be ready for the next pandemic, as well as for other situations that might need accurate, noncontact temperature measurements.

