EPA Budget Cuts: Challenging Inertia

President Trump’s proposed budget reduction has set off howls of protest from Democrats and a wide range of environmentalists.  The instantaneous reactions tell their own story, and it is not one of thoughtfulness.  Smaller budget proposals are a mechanism for organizations—public and private—to reassess, prioritize, and rethink their missions.

EPA was created in 1970, and its mission and approach have not changed since then.  Over the past 47 years, true to the economic theory of public choice, its scope and bureaucratic control have been a case study in mission creep.

No successful organization remains the same today as it was in 1970.  Most organizations resist change, as do most human beings.  In the private sector, competition, reduced demand, a new CEO, or potential obsolescence are motivators for change.  With the exception of a new President, those motivating forces do not exist in the public sector.  So, rather than begin with a defense of the status quo, environmentalists and Democrats should let the process play out during the budget hearing phase to determine which proposed changes to support or oppose. It is almost a given that the proposed 31% reduction will not be enacted.  EPA’s budget is $8.1 billion, and the proposed reduction would take it to $5.5 billion, roughly the level it was at in 1990.

When EPA was created, the nation faced serious environmental problems with air and water quality, waste disposal, and exposure to toxic substances.  Since then, tremendous progress has been made, as can be seen in EPA’s Report on the Environment.  As one example, air quality has shown tremendous improvement since measurements began in the late 1970s.  Ambient levels of pollutants specified in the Clean Air Act have fallen by amounts ranging from 27% for ozone to 89% for lead, reflecting significant reductions in emissions of covered pollutants.  Water quality has also improved but, unlike air quality, is very difficult to monitor on a national basis. Hazardous waste and toxic substances are addressed through the Resource Conservation and Recovery Act and the Toxic Substances Control Act.  Solid progress should be the major reason for rethinking EPA’s mission.

While a case could be made that in the early days of environmental management, command and control was a means of making sure that all states developed programs to implement environmental improvement and compliance programs, the only justifications for command and control today are that it is how things have always been done and that environmental advocates do not want to lose influence with bureaucrats.  Each state has set up an environmental department, and those departments issue regular reports on compliance and progress.

A strong national commitment to environmental protection and in-place compliance mechanisms make it possible to delegate implementation, compliance, and enforcement to the states, with EPA focusing on research, technical assistance, oversight, incentives to continue making progress, and, most important, identifying the points of diminishing returns. Instead of micromanaging, the agency should focus on results.  How a state achieves specific environmental objectives is less important than their timely achievement.

Over the past eight years, EPA was a regulatory machine on steroids.  The number of major regulations—those with an economic impact of $100 million or more—increased from 76 during the Bush Administration to 229 during the Obama Administration. The annual economic impact rose from $38 billion to over $100 billion.

Many of the Obama regulations were justified by questionable benefits flowing from equally questionable research and analysis.  For example, EPA asserted that further tightening of the ozone standard was justified because it would reduce the incidence of asthma attacks.  However, the incidence of asthma attacks has been increasing even as air quality has continually improved.  In justifying its Clean Power Plan rule, the agency claimed it would avoid 2,700 to 6,000 premature deaths.  According to CDC, there are 900,000 premature deaths annually.  EPA would have us believe that its epidemiological methodologies are sufficiently precise to measure changes of between 0.3 and 0.7 percent. EPA manufactured absurd results through modeling and research designed to support its beliefs, not to illuminate environmental conditions and impacts.  That approach needs to change; its research should fill gaps in knowledge while meeting objective scientific standards.
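As a rough check, here is a minimal sketch of the arithmetic behind that percentage claim, using only the figures quoted above (illustrative only, not a reanalysis of EPA or CDC data):

```python
# Arithmetic behind the premature-death percentages cited above.
# The inputs are the figures quoted in the text, nothing more.

annual_premature_deaths = 900_000          # CDC figure cited above
avoided_low, avoided_high = 2_700, 6_000   # EPA's Clean Power Plan claim

low_pct = avoided_low / annual_premature_deaths * 100
high_pct = avoided_high / annual_premature_deaths * 100

print(f"Claimed effect: {low_pct:.2f}% to {high_pct:.2f}% of annual premature deaths")
# -> Claimed effect: 0.30% to 0.67% of annual premature deaths
```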

A smaller budget changes incentives and helps reveal real environmental priorities.

EPA’s Shaky Foundation for Climate Regulations

During the Obama Administration, the climate agenda was driven by EPA, which used a bad Supreme Court decision to justify regulations that had no sound basis in science or economics.

In making its decision, the Court reached a number of conclusions of questionable validity.  It concluded, “Because greenhouse gases fit well within the Act’s capacious definition of ‘air pollutant,’ EPA has statutory authority to regulate emission of such gases from new motor vehicles,” and, “A well-documented rise in global temperatures has coincided with a significant increase in the concentration of carbon dioxide in the atmosphere. Respected scientists believe the two trends are related.”  The Court also accepted the IPCC’s conclusion that humans have a “discernible influence” on climate.

For years, courts have deferred to agencies based on the Supreme Court’s Chevron decision, which held that when a law is ambiguous, deference should be given to the agency’s interpretation.

The Bush Administration could have strongly rebutted the arguments that led to the Court’s conclusions but instead opted to challenge the petitioners’ standing to bring suit and other technicalities.

Beginning with the conclusion that EPA could determine that CO2 was a pollutant, the Court then moved easily to the decision that the agency could regulate it if it found that CO2 endangered human health and welfare.  What the Court ignored was that Congress, in passing the 1990 Clean Air Act amendments, explicitly decided against granting EPA authority to regulate CO2.  Clearly, in this instance, deference should have been given to Congressional intent.  The Court also ignored the established fact that CO2 is a nutrient, not a pollutant.  In citing the rise in global temperatures and the increase in CO2 concentrations, the Court made the serious blunder of equating correlation, which in this case really doesn’t exist, with causation.

By allowing EPA to stretch the definition of pollutant, the Court had to accept that increased concentrations of CO2 in the atmosphere would cause harm and that regulating emissions would avoid that harm.  Even if you concede the Court’s assumption, there is no way that EPA regulations could produce any beneficial effect.  To make significant reductions in atmospheric concentrations, all nations would have to reduce emissions.  At the time of the Court’s decision, the only global agreement was the Kyoto Protocol, which exempted developing countries, the major source of emissions.  So, it should have been abundantly clear that granting EPA the authority to regulate would impose large costs while producing no reduction in warming and no climate benefits.

Beyond the errors cited, the Supreme Court also ignored its own 1993 decision, Daubert v. Merrell Dow Pharmaceuticals, which set forth a standard for scientific evidence.  The Court stated, “… in order to qualify as ‘scientific knowledge,’ an inference or assertion must be derived by the scientific method.”  It went on to state that the test of “whether a theory … is scientific knowledge … will be whether it can be (and has been) tested.  Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified.”  The conclusions of the IPCC, and the work it relies on in concluding that human activities are the primary cause of global temperature increases and associated climate events, are based on climate model outputs, not on the results of experiments that can be falsified.  The models have not been validated, nor have most of the assumptions incorporated in them.  It is that simple.


Scott Pruitt’s Heresy: Telling the Truth and Not Being Politically Correct

The chattering climate apocalyptics are in a high state of agitation over EPA Administrator Pruitt’s comment that “I think that measuring with precision human activity on the climate is something very challenging to do, and there’s tremendous disagreement about the degree of impact, so no, I would not agree that it’s a primary contributor to the global warming that we see.”

The mainstream media and some charter members of the climate club reacted predictably by trotting out the infamous 97%, along with other elements of the climate orthodoxy, and hanging the label of “denialist” around his neck.  It would have been refreshing if at least one major newspaper had asked him to explain why he holds a view at odds with that of many scientists.  Reporting is supposed to be about digging for facts, not preemptive dismissal.

Mr. Pruitt did not say that the earth had not warmed, he did not deny that the climate changes, and he did not say that there was no human influence on the climate system.  His sin was to challenge the certitude of the adherents to the climate orthodoxy.  There is abundant evidence that the case that humans are mostly responsible for warming since the middle of the last century is a construct resting on a shaky scientific foundation.

No one disputes that CO2 is a greenhouse gas that warms the earth, and no one challenges the conclusion that a doubling of CO2, absent positive feedback, would increase global temperatures by 1 degree C.  The IPCC and other climate advocates assume enhanced feedbacks that will make temperatures in the future—note that it is always decades in the future—significantly higher.  While these assertions are made as if there were no uncertainty, the IPCC itself estimates that climate sensitivity is somewhere between 1.5 degrees C and 4.5 degrees C.  A factor-of-3 difference undermines the certainty with which advocates make their announcements and predictions.
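To make that spread concrete, here is a minimal sketch assuming the standard logarithmic relationship between CO2 concentration and equilibrium warming; the function name and the 400 ppm figure for the present-day concentration are illustrative assumptions:

```python
import math

def equilibrium_warming(c_ppm, sensitivity_c, baseline_ppm=280.0):
    """Warming implied by a CO2 level, where sensitivity is degrees C
    per doubling and the response is logarithmic (a standard assumption)."""
    return sensitivity_c * math.log2(c_ppm / baseline_ppm)

# The IPCC's stated sensitivity range, applied to a present-day ~400 ppm
for sensitivity in (1.5, 4.5):
    dt = equilibrium_warming(400, sensitivity)
    print(f"Sensitivity {sensitivity} C/doubling -> {dt:.1f} C of warming")
# -> 0.8 C versus 2.3 C: the same factor-of-3 spread noted above
```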

In addition, if the case for human causality were so compelling, climate advocates would not have to resort to statistical tricks and manipulation to mislead the public.  Each year, NASA announces with great fanfare that the current year is one of the warmest on record or, in the case of 2016, the warmest on record.  What NASA doesn’t say is that for most of the years in question, the difference from a prior year is measured in tenths of a degree and is often within the margin of measurement error.  In the case of 2016, it was warmer than recent years because of El Nino, but it was not the warmest ever recorded.  According to NASA data, 1934 was the same as 2016.

Even the extent of warming is open to debate.  Since the end of the Little Ice Age, the US temperature record shows a cyclical pattern but also a warming trend.  Over the course of more than 120 years, surface measuring stations and devices have changed.  The location of measuring stations, the development of urban areas, and the switch from mercury thermometers to electronic thermistors all affect temperature recordings.  Urban development creates heat islands, and adjustments are necessary to correct for the heat they retain. Those adjustments are based on beliefs and assumptions.  An audit of the more than 1,200 measuring stations by Anthony Watts found that “89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source.”  https://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf

In 1954, Darrell Huff wrote the classic How to Lie with Statistics.  A sequel could be written relying solely on the analytical tricks and misused statistics of climate advocates.

The case for skepticism about the climate orthodoxy provides a strong platform for Mr. Pruitt to stand on.  If he holds the EPA staff to accepted standards of scientific evidence, the case built during the Obama Administration by Gina McCarthy will wither away.  That would be a real public service.


Starve a Feeding Bureaucrat

Few things in life are predictable with certainty but the reaction of the climateers to the proposed cut in government climate research certainly was.

The Global Change Research Program dates back to the 1990s and the Clinton Administration.  During that time, the views of Vice President Gore and the strength of his office dominated how federal research dollars were spent. Anyone who questioned the orthodoxy need not apply.

With the election of George W. Bush, the program was reoriented and placed under the Department of Commerce.  The new Climate Change Research Initiative (CCRI) was intended “to reduce significant uncertainties in climate science, improve global observing systems, develop science-based information resources to support policymaking and resource management, and communicate findings broadly among the international scientific and user communities.”

Not surprisingly, the program got captured by a bureaucracy that reflected the beliefs of Al Gore and NASA’s James Hansen.  The goal quickly shifted from gaining new scientific knowledge to “research” that would confirm the indictment of CO2 and an impending catastrophe caused by human activities such as driving, heating and lighting homes and commercial facilities, and carrying on a life that was enriched by the use of fossil fuels.

Between 2002 and 2016, the climate research program grew from about $5 billion to $7.5 billion.  According to the Congressional Budget Office, there was also a dramatic one-time increase to almost $38 billion in 2009 as part of the American Recovery and Reinvestment Act.

What did we get for those dollars?  Based on the most recent report of the Intergovernmental Panel on Climate Change, not much.  We don’t know much more about natural variability, climate sensitivity (although its lower bound has been reduced), cloud formation, water vapor, or solar impacts.  In the early 2000s, NOAA did initiate an enhanced ocean observing system, which has improved knowledge of the oceans.

The history of the federal government’s climate research program makes clear that unbiased research is not likely to come from the federal bureaucracy.  A Cato Institute paper by Patrick Michaels—Is the Government Buying Science or Support? A Framework Analysis of Federal Funding-induced Biases—provides an insightful explanation of the biases that have been widely documented.  Michaels cites a critique of the USGCRP by Professor Judith Curry, recently retired from Georgia Tech.  She describes the program as “off the rails” because it is designed to promote policies rather than advance climate science.

None of this should be surprising.  The beginning of the Cato report includes a reference to Thomas Kuhn: “fundamental beliefs can take over a scientific field. He called these entrenched beliefs ‘paradigms’ and noted that they tend to direct scientific thinking in specific directions. Once these beliefs become entrenched they are difficult to dislodge, despite growing evidence that they may be incorrect. Science, like any human endeavor, is subject to fads and fashions.”

James Buchanan, the Nobel laureate who developed the public choice theory of economics, showed that bureaucrats are like the rest of us: they pursue their own self-interest.  Holding government positions doesn’t convert them into angels who selflessly guard the public interest.  That doesn’t make them venal, only human.

There are a lot of questions about the climate system that need answering, but the approach taken over the past several decades won’t provide robust answers.  And just cutting budgets isn’t the answer either.  Our nation has a long history of supporting basic research that has produced new knowledge in every field of science.  Funding basic research in physics, meteorology, and oceanography at well-known centers of excellence through competitive bidding is one route, though not the only one, to an improved understanding of our climate system and the human impact on it.

So yes, starve a feeding bureaucrat, but not our thirst for knowledge.


CO2: The Climate Change Lamppost

The streetlight effect is a metaphor for knowledge and ignorance, captured in an observation by Noam Chomsky that applies equally to climate science: “Science is a bit like the joke about the drunk who is looking under a lamppost for a key that he has lost on the other side of the street, because that’s where the light is. It has no other choice.”

Climate advocates point to increases in CO2 and predict that they will lead to catastrophic warming unless action is taken to reduce emissions.  The models that are the foundation of the climate orthodoxy have been built on this assumption because CO2 is a greenhouse gas.  But CO2’s warming potential is not linear: as more is added to the atmosphere, each increment produces less warming than the one before it.
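A minimal sketch of that diminishing-returns behavior, using the widely cited simplified forcing expression F = 5.35 ln(C/C0) from Myhre et al. (1998); the concentration steps are illustrative:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from a CO2 increase, per the widely
    cited simplified expression F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each successive 100 ppm adds less forcing than the previous 100 ppm.
levels = [280, 380, 480, 580]
for lo, hi in zip(levels, levels[1:]):
    added = co2_forcing(hi) - co2_forcing(lo)
    print(f"{lo} -> {hi} ppm: +{added:.2f} W/m^2")
# 280->380: +1.63, 380->480: +1.25, 480->580: +1.01 (diminishing increments)
```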

Even though climate models routinely overpredict the warming associated with changes in CO2, and climate history shows a lack of causality, the assertion of catastrophic warming is accepted as fact by many policymakers, the mainstream media, and scientists who succumb to groupthink by not taking the time to do their own analysis.

The only explanation for this wrong-headed focus is that CO2 can be measured, taxed, and regulated, whereas the predominant greenhouse gas, water vapor, cannot.  Climate change advocates continue to assume that increases in atmospheric concentrations of CO2 will produce a positive feedback that enhances the warming effect of CO2 emissions.  This would be caused by increases in the amount or distribution of water vapor and clouds, according to a 2013 paper by Ken Gregory—Water Vapor Decline Cools the Earth: NASA Satellite Data.  His observation is not new.  In 1990, for example, Robert Jastrow, Fred Seitz, and William Nierenberg made the same point in a George C. Marshall Institute report, Global Warming: What Does the Science Tell Us?

Gregory’s analysis concludes, “Climate models predict upper atmosphere moistening which triples the greenhouse effect from man-made carbon dioxide emissions. The new satellite data from the NASA water vapor project shows declining upper atmosphere water vapor during the period 1988 to 2001,” which was the latest data available at the time.

Until climate advocates abandon the groupthink that has been their unifying force, they will be like the drunk who uses a lamppost for support instead of illumination.


Climate Orthodoxy: The Limits of Professional Judgment

The climate orthodoxy that human activities are primarily responsible for warming since the mid-20th century rests mainly on a foundation of professional judgment, not scientific research involving falsifiable hypotheses.  That foundation is also the basis for the so-called consensus of scientists, which has been shown to be an “alternative fact.”

There is nothing wrong with being guided by professional judgment as long as it is recognized that professional judgment does not equate to factual certitude.  Michael Lewis, in his most recent book, The Undoing Project, writes about the work of two psychologists, Amos Tversky and Daniel Kahneman (a Nobel laureate in economics), whose research completely changed our understanding of human decision making and helped create behavioral economics.  The bottom line is that humans—no matter what their calling and education—are not as rational as they think they are.

Tversky and Kahneman’s insights on professional judgment and the use of subjective probabilities justify a reasonable level of healthy skepticism about the role of professional judgment in shaping the narrative on climate change.

All of us give weight to professionals who advise us or who are considered experts in a particular field.  However, Tversky and Kahneman demonstrated that too much weight and too little skepticism are given to professional judgment, especially on subjects not rich with empirical data.  In a paper on how people make judgments, Subjective Probability: A Judgment of Representativeness, they wrote, “The decision we make, the conclusions we reach, and the explanations we offer are usually based on our judgments of the likelihood of uncertain events…. In … many other uncertain situations, the mind did not naturally calculate the correct odds.”  Instead, “it replaced the laws of chance with rules of thumb.”  When people make judgments, they compare the specific topic with a mental model.

They also addressed when the rule-of-thumb—heuristic—approach leads to “serious miscalculation” and concluded that it does so when the matter involves judgment under uncertainty: “People do not appear to follow the calculus of chance or statistical theory of prediction.  Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error.”

Their work is relevant to climate change because climate science is a relatively new field of knowledge and much about how the climate system operates is not well understood. Distinguishing between what is reasonably well known about climate processes and what represents assumptions and best judgments should be more explicit, leading to a greater level of humility and less certitude.

The weakness in this perspective is the implication that scientists who believe that the science is settled and support the climate orthodoxy have the same failing. How could that be?

Many, perhaps most, of the scientists who represented the thinking in the 1980s, when climate change was just emerging as an issue, had been active in the environmental movement and were advocates of the limits-to-growth narrative.  Maurice Strong, an environmental activist, and John Houghton, the first chair of the IPCC’s scientific assessment working group, were instrumental in recruiting scientists who would support climate alarmism.  Once that core was in place, billions of dollars in funding were directed to the IPCC process and to research on the climate system and the threat it posed.  Climate change became an industry that enriched those who embraced the notion that human activities, the burning of fossil fuels, were causing a climate catastrophe.

Professor Paul Romer of NYU made an insightful observation about the failure of a scientific field that relies on mathematical modeling.  Although he was referring to macroeconomics, his comments are equally applicable to climate science: “Because guidance from authority can align the efforts of many researchers, conformity to the facts is no longer needed as a coordinating device. As a result, if facts disconfirm the officially sanctioned theoretical vision, they are subordinated. Eventually, evidence stops being relevant. Progress in the field is judged by the purity of its mathematical theories, as determined by the authorities.”

That is the groupthink phenomenon, which holds that groups with common interests or purposes seek cohesion and collegiality at the expense of independent thinking and challenge.  Those who do not conform are sanctioned.  In the case of climate change, they are labeled skeptics and deniers.


Scientific Shenanigans Equal Loss of Credibility

The CEO of the American Association for the Advancement of Science, Rush Holt, recently said that “scientists are partly to blame for skepticism of evidence in policy making.” He was referring to a “haughty attitude” that has generated a backlash in the body politic against all types of scientific evidence. That is undoubtedly true, but it is a very superficial explanation. The climate change debate is a case in point.

The treatment of science and the scientific process by the climate establishment is clear evidence that scientists who promote the climate orthodoxy do not have what Holt refers to as “reverence for evidence.” Their reverence is for self-interest and ideology.

Holman Jenkins of the Wall Street Journal recently exposed the shenanigans that NOAA and NASA have engaged in when reporting annual temperatures. While both organizations reported that 2016 was the warmest year on record, Jenkins pointed out that this was only the case because both agencies have, since 2009, left out any reference to measurement uncertainty. When error bars are included, 2016 and 2015 are essentially the same, as are a number of earlier years going back to 1998.

When the difference between years is a tenth of a degree or a few tenths of a degree, the news value of these annual reports goes to zero. Omitting any mention of measurement uncertainty is not only misleading but also an example of what the author Darrell Huff called “how to lie with statistics.” The problem of data manipulation is not limited to the reporting of annual temperatures. Dr. John Bates, a Department of Commerce Gold Medal winner and former principal scientist at the National Climatic Data Center, exposed a 2015 NOAA study that relied on unverified data and was rushed to publication to discredit the global warming pause for the purpose of influencing negotiations at the climate summit in Paris.

NOAA’s statistical chicanery with temperature data is not a case of a few well-meaning scientists engaging in trickery to draw public attention to an impending climate catastrophe. It is part of a well-organized initiative to promote the sustainable development agenda. In 2009, the famous Climategate scandal exposed how an influential group of scientists was engaged in a conspiracy to discredit so-called skeptics, manipulate the peer review process for self-enrichment and political ends, and distort statistical data to advance the climate orthodoxy.

Citing the “haughty attitude” of scientists as the reason for a loss of credibility is analogous to blaming the sinking of the Titanic on a weak hull. The misuse of the scientific process to exploit climate change and other scientific issues will continue as long as scientists don’t pay a price for using it to pursue political agendas and self-enrichment, and as long as the leadership of scientific organizations doesn’t demand integrity, openness, and respect for dissenting views.

If Mr. Holt wants to improve scientific credibility, he should take the lead in promoting an initiative to establish a higher standard of excellence and transparency for scientific research, especially research used for policy making. In particular, since peer review has been gamed, as Climategate revealed, the peer review process needs to be reformed. And since science is about challenging prevailing hypotheses and theories, dissent must be promoted and protected, not demonized.

The bottom line is that the scientific establishment needs to do a better job of policing itself and holding scientists and scientific work to the principles set by Richard Feynman and Karl Popper.

There They Go Again

The New York Times announced that, according to NASA and NOAA, 2016 was the warmest year on record, as had been the two preceding years. Every year at this time, the media and environmental advocates make similar announcements about the prior year either setting a record or being one of the hottest on record.

By now, most people just yawn at such announcements because they have been told the same tale for so long that it has lost its meaning, as it rightfully should. Relying on their common sense, they also realize that, with the exception of a few very cold or very hot days, the temperature on most days is what they expect. Common sense trumps the orthodoxy.

Claims of record-setting temperatures and dangerous warming make good stories but don’t conform with reality. Satellite temperature measurements show no statistically significant warming since 1998. 2016, like 1998, was an El Nino year, which means it was warmer than non-El Nino years. NOAA reports that the 20th-century average temperature was 52 degrees F, while the average for the years 2000 through 2015 was 53.3 degrees F. Clearly, the 16 years prior to 2016 were warmer than the century average, but so were the 16 years starting in 1930, which averaged 53.2 degrees; 1934, at 54.9 degrees, was the same as last year. A difference of 0.1 degrees between comparable periods is hardly newsworthy.
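A minimal sketch of the comparison being made, using only the period averages quoted above (illustrative arithmetic, not a reanalysis of NOAA data):

```python
# Period averages quoted above (degrees F)
century_avg  = 52.0   # 20th-century average
recent_avg   = 53.3   # average for 2000 through 2015
thirties_avg = 53.2   # average for the 16 years starting in 1930

print(f"2000-2015 vs century:      +{recent_avg - century_avg:.1f} F")
print(f"1930s period vs century:   +{thirties_avg - century_avg:.1f} F")
print(f"2000-2015 vs 1930s period: +{recent_avg - thirties_avg:.1f} F")
# The two 16-year periods differ by only 0.1 F
```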

The difference between recent averages and the century average is overstated for a couple of important reasons. Newer measuring devices—thermistors—that take continuous measurements read warmer than the older thermometers, which took measurements twice a day. So earlier temperatures were understated, and those prior measurements required adjustments, which may or may not be accurate. In addition, urban and suburban development has created more heat islands than existed for most of the 20th century. While NOAA and NASA make adjustments for those two factors, the adjustments are estimates, not precise corrections.

MIT professor emeritus Richard Lindzen’s reaction to the latest report provides a clear perspective on the annual hand-wringing over year-to-year temperature changes: “To imply that a rise of temperature of a tenth of a degree is proof that the world is coming to an end — has to take one back to the dark ages.” … “As long as you can get people excited as to whether it’s a tenth of a degree warmer or cooler, then you don’t have to think, you can assume everyone who is listening to you is an idiot.”

Scientific American: Misleading And Not Informative

A January 13 article in Scientific American attempted to undermine a statement by Rex Tillerson during his confirmation hearing. In answer to a question, he said “our ability to predict” the effect of increased greenhouse gas concentrations in the atmosphere “is very limited.” Scientific American asserts, “That’s not entirely accurate.” Beyond substituting its own interpretation of the meaning of “is very limited,” the article conflates a scientific fact with professional judgment and computer model outputs to reach a conclusion that is not valid.

Scientific American makes the point that scientists’ “ability to make predictions based on a particular theory corresponds to the number of times they’ve verified that theory using different lines of evidence: The more verification, the more likely it is that their predictions will turn out to be accurate.” To start, scientists have verified the theory of the greenhouse effect, which says that gases like CO2 trap the sun’s heat. The article then states the obvious: that the warming potential of CO2 “is no longer a matter of prediction.”

There is no disagreement that CO2 is a greenhouse gas and contributes to the planet’s warming. Without it, the average temperature would be below freezing. Beyond the fact that CO2 warms the planet, there is no evidence that the professional judgment of climate scientists and advocates is a good proxy for scientific facts concerning the climate system. And the models that predict significant warming—warming that has not occurred over the past 20 years—incorporate assumptions that have not been validated. That helps explain why the models perform so poorly.

Physics has established that the warming potential of CO2 is non-linear, which means that the warming associated with the next increment to the atmospheric concentration of CO2 is less than that of the one that preceded it. For CO2 emissions to be responsible for more than half of the warming that has occurred over the past 50 years, climate sensitivity would have to be much greater than any empirical evidence suggests is likely.

The statements made by Scientific American concerning warming over the past 50 years are based on the subjective probabilities and professional judgment of scientists who participate in the IPCC process. Scientific American would do well to explore the research foundation of groupthink as well as the work of Daniel Kahneman and Amos Tversky, who demonstrated the biases that influence even the most capable. In short, we are not as rational as we think. Michael Lewis’ most recent book, The Undoing Project, is a good introduction to Kahneman and Tversky’s research.

Scientific American cites the high confidence of scientists that “global warming will lead to changes in the climate, including a rise in extreme weather events and sea levels. This is also no longer a matter of prediction.” Again, that statement is based on professional judgment, not established science. Carl Wunsch, one of the world’s leading oceanographers, stated some years ago that sea level has been rising since the end of the last ice age, almost 20,000 years ago, and will continue to rise until the next one. The understanding of the human contribution is in its early stages because advanced measurement technology has been in place for only a little over a decade.

The claims about extreme weather are pure advocacy rhetoric, as has been shown by the work of Professor Roger Pielke. In testimony before the U.S. House of Representatives, he made the following summary points: “There exists exceedingly little scientific support for claims found in the media and political debate that hurricanes, tornadoes, floods and drought have increased in frequency or intensity on climate timescales [periods of 30-50 years and longer] either in the United States or globally. These conclusions are supported by a broad scientific consensus, including that recently reported by the Intergovernmental Panel on Climate Change (IPCC) in its fifth assessment report.”

Since its series of attacks on Bjorn Lomborg’s The Skeptical Environmentalist in 2002, Scientific American has continued to demonstrate that Wall Street Journal columnist James Taranto was probably correct in labeling it “a liberal political magazine.” Its treatment of Rex Tillerson’s comments on climate change was political journalism.


Who’s the Real Denier?

Climate advocates apply the pejorative term “denier” to anyone who does not genuflect at the altar of climate orthodoxy. The term is intended to discredit, not inform. But is it true that those who raise questions about the climate orthodoxy are really deniers?

It is not necessary to get wrapped up in a debate about climate sensitivity, solar impacts, cloud formation or other processes that influence our climate. All that is necessary is to compare predictions that have been made since the start of the climate catastrophe campaign with the empirical evidence we have today on how the climate system has actually performed.

In 1988, then-Senator Al Gore and NASA scientist James Hansen made a number of alarming predictions about an impending climate apocalypse. In Congressional hearings, Hansen asserted that “the greenhouse warming should be clearly identifiable in the 1990s” and that “the temperature changes are sufficiently large to have major impacts on people and other parts of the biosphere, as shown by computed changes in the frequency of extreme events …” Based on his climate model, Hansen predicted that the global temperature would rise by 0.45 degrees C by 1997 and by 2 to 4 degrees by 2010. A scientist with the Climatic Research Unit at the University of East Anglia went so far as to predict that within “a few years,” snowfall would become “a very rare and exciting event” in Britain: “Children just aren’t going to know what snow is.”

Senator Gore used Senate hearings to promote Hansen’s apocalyptic vision and then went on to write Earth in the Balance, documenting his view that the world faced an environmental epidemic and that “We must make the rescue of the environment the central organizing principle for civilization.” He used Earth in the Balance to promote and fund the climate change agenda, which led to predictions of runaway global warming. Those included the prediction that over the next few decades up to 60 percent of Florida’s population would have to be relocated because of rising sea level. There were also predictions of increasing droughts and extreme weather events like hurricanes.

Now that almost three decades have passed since those predictions took on a life of their own, it is possible to see the extent of their accuracy.

The forecasts made by James Hansen and supported by Al Gore, that global temperature would increase 0.45 degrees C by 1997 and 2 to 4 degrees by 2010, were off by factors of roughly 4 and 10, respectively. Indeed, since 1998 there has been no significant increase in temperature. And while advocates proclaim that each year is one of the warmest on record, recent temperatures are not much different from those of the early decades of the 20th century.
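Taking those error factors at face value, here is a minimal sketch of the implied arithmetic; the "actual" values are back-calculated from the factors asserted above, not independent measurements:

```python
# Hansen's predictions as quoted above, and the error factors the text asserts.
predicted_1997 = 0.45             # degrees C of warming predicted by 1997
predicted_2010 = (2.0, 4.0)       # degrees C of warming predicted by 2010
factor_1997, factor_2010 = 4, 10  # "off by factors of roughly 4 and 10"

implied_1997 = predicted_1997 / factor_1997
implied_2010 = tuple(p / factor_2010 for p in predicted_2010)

print(f"Implied actual warming by 1997: ~{implied_1997:.2f} C")
print(f"Implied actual warming by 2010: ~{implied_2010[0]:.1f} to {implied_2010[1]:.1f} C")
# -> ~0.11 C by 1997, ~0.2 to 0.4 C by 2010
```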

Clearly, Al Gore exaggerated the need to relocate 60% of Florida’s population. In fact, since 1992, according to the Congressional Research Service, global sea level rise has averaged 0.13 inches per year, or about 3 inches in total. Estimates of sea level rise along the Florida coast are higher but not alarming—around 6 inches.
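The cumulative figure follows directly from the rate; a minimal check, assuming the span runs from 1992 to roughly the time of writing (the 2016 endpoint is an assumption):

```python
# Cumulative sea level rise implied by the CRS rate quoted above.
rate_in_per_yr = 0.13    # inches per year since 1992 (CRS figure)
years = 2016 - 1992      # assumed span through the time of writing

total = rate_in_per_yr * years
print(f"~{total:.1f} inches of cumulative rise over {years} years")
# -> ~3.1 inches, consistent with the "about 3 inches in total" above
```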

That leaves the predictions about increasing droughts and extreme weather—hurricanes. The data tell a much different story. Hurricane frequency has declined slightly, and intensity has not changed. As for droughts, Dr. Roger Pielke stated in Congressional testimony that droughts have “for the most part, become shorter, less frequent, and cover a smaller portion of the U.S. over the last century,” and that globally “there has been little change in drought over the past 60 years.”

The comedian Groucho Marx once asked, “Who are you going to believe, me or your own eyes?” Climate advocates would have everyone believe them even though the empirical data don’t support their claims. Climate skeptics deny the faux science that is the foundation of the climate orthodoxy, while climate advocates deny how the climate system has actually performed. So, who is the real denier? Groucho Marx provided the easiest way to answer that question.