Grace Hopper, a pioneering computer scientist and Navy rear admiral, once said, “One accurate measurement is worth a thousand expert opinions,” which extended W. Edwards Deming’s “Without data, you’re just another person with an opinion.” But rather than relying on accurate measurements, as Thomas Sowell put it in Discrimination and Disparities, public policy is often hamstrung by “overlooking simple but fundamental questions as to whether the numbers on which…analyses are based are in fact measuring what they seem to be measuring, or claim to be measuring,” requiring “much closer scrutiny at a fundamental level.”
That failure is far from minor. In fact, it is at the root of some truly major policy issues.
American promoters of single-payer health care systems such as Medicare for All, for example, routinely base their promises of something for nothing on huge administrative cost savings. But those cost savings are actually the product of multiple measurement errors. The reality is that such substitution would increase administrative costs, presenting us with a nothing-for-something deal instead.
Similarly, many have used higher measured infant mortality rates in America to attack our health care system and demand more government control as the solution. Such comparisons, however, ignore important differences in what countries count as infant deaths (babies who are at very high risk or who die shortly after birth are counted as live births in the U.S., but often as stillbirths in many other countries), as well as factors unrelated to health care (including the much higher proportion of teenage mothers and of preterm and low-birth-weight babies in the U.S. than in comparison countries). Damning conclusions cannot reliably be drawn from such biased measures.
Measurement flaws are also at the heart of even more important public debates, such as climate change and its attendant policies. Many race past such issues in a hell-bent dash to assert that their conclusions and proposed impositions “follow the science” to avert climate catastrophe.
This was illustrated in H. Sterling Burnett’s recent Climate Change Weekly #442 for the Heartland Institute, which reported that the “U.S. Surface Station Network is Fatally Flawed.” As Burnett summarized the problem, those flaws result in “reported average temperatures being higher and trending steeper than if the system used accurate measures.” He based that conclusion on two studies by meteorologist Anthony Watts. The first, published in 2009, was Is the U.S. Surface Temperature Record Reliable?; the second, Corrupted Climate Stations: The Official U.S. Temperature Record Remains Fatally Flawed, a follow-up to the first, was just published this year.
There is a great deal of meat in Watts’ publications. In particular, he presents powerful evidence of substantial upward biases in the data derived from many ground stations up until 2009, calling into sharp question whether global warming was “proven” at all. Worse, the proportion of stations out of siting compliance has actually risen since then. The GIGO (garbage in, garbage out) principle calls for severe skepticism.
The Foreword to the 2022 report summarized his conclusions:
The original report found the ground-based system for measuring surface temperatures in the United States was biased by asphalt, machinery, and other heat-producing, heat-trapping, or heat-accentuating objects located near many official temperature stations and their sensory equipment. The new study reexamines these temperature stations and equipment to determine whether there remain flaws in the official U.S. surface temperature record. This report finds approximately 96 percent of U.S. temperature stations fail to meet what the National Oceanic and Atmospheric Administration (NOAA) considers to be “acceptable,” uncorrupted placement. These findings strongly undermine the legitimacy and the magnitude of the official consensus on long-term climate warming trends.
The 2009 report used a rating system based on official NOAA documents to assess each surveyed station for compliance with the official siting standard. It found that only 7.9 percent of the stations met the standards (generating an upward bias of less than 1 degree). Another 21.5 percent generated a likely upward bias of over 1 degree, 64.4 percent a likely upward bias of over 2 degrees, and 6.2 percent a likely upward bias of over 5 degrees. In addition, the recent study found that the higher the temperature at a given location, the greater the bias, and the less credence could be given to reported temperature increases. The 2022 report found that while some of the stations that had been ridiculed for how far out of siting compliance they were (in parking lots, right by rock or concrete walls, next to power transformers and air conditioner exhausts, even on a pole sitting in water supplying a hot spring) had been removed from the network, many were not. And 96 percent of the network now cited in official reports failed the standards.
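The arithmetic implied by that rating distribution can be made concrete with a back-of-the-envelope sketch. Treating each category’s threshold as a conservative floor (0 degrees for compliant stations, and 1, 2, and 5 degrees for the non-compliant tiers) is an illustrative assumption of mine, not a calculation from the reports themselves:

```python
# Back-of-the-envelope lower bound on the network-wide warm bias implied by
# the 2009 survey's rating distribution. Treating each category threshold as
# a conservative floor is an illustrative assumption, not a figure from the
# reports.

# (share of stations, minimum likely upward bias in degrees)
categories = [
    (0.079, 0.0),  # compliant stations: bias under 1 degree, floored at 0
    (0.215, 1.0),  # likely upward bias of over 1 degree
    (0.644, 2.0),  # likely upward bias of over 2 degrees
    (0.062, 5.0),  # likely upward bias of over 5 degrees
]

total_share = sum(share for share, _ in categories)
assert abs(total_share - 1.0) < 1e-9  # the four tiers cover all stations

lower_bound = sum(share * floor for share, floor in categories)
print(f"implied network-wide upward bias: at least ~{lower_bound:.1f} degrees")
```

Even with every category held to its floor, the weighted average comes out well above zero, which is the point of the GIGO complaint: the measurement system itself contributes a systematic warm signal before any climate trend is considered.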
The reports contain far more information that is routinely ignored in people’s rush to their preferred policy conclusions than I can note here, but I can recommend them as not just worth the read, but worth the look. That is because photographs document many sites’ blatant deviations from the siting rules. Most strikingly, the 2022 report includes many color infrared pictures that show the extent of the localized heat bias generated by failing to follow siting standards. They illustrate that when dealing with those who want to ignore the proven bias in the data and focus only on the conclusions the corrupted data supposedly supports, an infrared picture is worth a thousand words.
The post An Infrared Picture is Worth a Thousand Words was first published by the American Institute for Economic Research (AIER), and is republished here with permission. Please support their efforts.