Hand-Wringing or Serious Threat?

New Jersey Senator Robert Menendez is expressing concern that the potential purchase of Citgo assets from Venezuela by Russia's Rosneft could represent "threats posed to our national security, economy and energy independence." Currently, Rosneft, a Russian-owned oil company, holds 49.9% of Citgo as security for a loan made last year to Venezuela. Menendez and several other members of Congress have written to the Treasury Secretary expressing concern about the prospect of Rosneft gaining control of Citgo.

Given Russia's meddling in last year's election and the potential for broader use of cyber warfare for political disruption, a higher level of concern is justifiable. Instead of jumping into the competition for news coverage, it would be more reassuring if those members of Congress first gathered the facts and analyzed them.

Citgo has had a substantial presence in the US for decades. Its gasoline is sold through about 6,500 independent outlets in 28 states, and Citgo Petroleum owns oil refineries in Illinois, Louisiana, and Texas. While a Rosneft takeover would represent a substantial Russian investment and footprint in the US, it would not be the only one. Since 2000, Lukoil, one of the 20 largest oil companies in the world, has been selling petroleum products in 11 east coast states and the District of Columbia.

While adding Citgo to its US portfolio would allow Russia to engage in serious disruptive mischief, the question is: would it? Probably not. Putin and his oligarch cronies, who have enriched themselves by plundering Russia's resources, have moved their wealth overseas to protect it and see it grow. Putting a large investment such as Citgo at risk would not be in their self-interest. Any action to disrupt our energy market could be countered in a number of ways, including but not limited to freezing their assets.

Oil is fungible, so any effort to manipulate the flow to Citgo refineries could be offset after the initial disruption. When refineries go down, except in California, others move quickly to make up the shortfall. Any action that constrained product sales would simply expand competitors' market opportunities without a long-term effect on price. That is how competitive markets work.

Foreign investment can contribute to a more productive relationship between the US and Russia, which is sorely needed.

Congressional due diligence is appropriate; ill informed fear mongering isn’t.

Irrational Exuberance

That is the phrase made famous by Alan Greenspan in describing the dot-com bubble of the late 1990s. Today, it could be applied to the investment bubble in Tesla. The Wall Street Journal reported that Tesla has surpassed Ford Motor Company in investor value even though its sales are about 1% of Ford's, and Tesla's earnings per share and net income have been negative since the company came into being. Since there are no signs of it earning a profit anytime soon, especially if government subsidies are taken away, investors, except the shrewd ones, are displaying irrational exuberance.

The LA Times ran an article on Elon Musk two years ago and made the point that Tesla wouldn't be around without the $4.9 billion in government subsidies it has received. Musk has raised crony capitalism to an art form and has perfected the Bootleggers-and-Baptists scheme of getting very rich by becoming an environmental icon.

Without continued taxpayer funding, Tesla would have gone bankrupt as Solyndra and A123 Systems did. As the LA Times pointed out, the Model S, the wealthy's symbol of environmental political correctness, "sells for more than $100,000, but that is literally tens of thousands of dollars less than it costs to manufacture and sell."

The Times went on to say, “Every time a Tesla is sold, we witness a transfer of wealth to a rich hobbyist (most Teslas are their owners’ third or fourth car), while average Americans are on the hook for at least $30,000 in federal and state subsidies. Tesla is more a regulatory arbitrageur than an auto manufacturer.”

In addition to the $7,500 federal tax credit, a number of states provide credits or rebates to Tesla buyers. Not surprisingly, the most outrageous is California, which created a zero-emission mandate requiring an arbitrary number of "zero-emission" vehicles to be sold each year. Tesla's Model S earns four emission credits per unit sold, which Tesla then sells to other manufacturers for $20,000, with the cost borne by California taxpayers. A few years ago, Tesla received over $129 million in these credits but still lost $61 million on its manufacturing and sales. Clearly, you can't go broke as long as you are spending someone else's money.

The image of a zero-emission, high-mileage vehicle that is affordable for most is almost every driver's dream. And Tesla is promising a new, lower-cost Model 3 in the near future at $35,000. Investors must be betting on economies of scale, including in battery costs, lowering costs enough to make the Model 3 truly affordable with a range greater than 200 miles between charges. Absent a battery technology breakthrough, which doesn't appear likely any time soon, lithium-ion batteries and their costs will limit Tesla's range, and the potential loss of subsidies limits Tesla's future. Ford has the brighter future.

Early investors who sell before reality strikes will make a killing, leaving cult investors who bet on a dream holding the proverbial bag. Tesla stock is likely to be our generation's version of Holland's tulip mania of the 1630s.

EPA Budget Cuts: Challenging Inertia

President Trump's proposed budget reduction has set off howls of protest from Democrats and a wide range of environmentalists. The instantaneous reactions tell their own story, and it is not one of thoughtfulness. Smaller budget proposals are a mechanism for organizations, public and private, to reassess, prioritize, and rethink missions.

EPA was created in 1970, and its mission and approach have not changed since then. Over the past 47 years, true to the economic theory of public choice, its scope and bureaucratic control have been a textbook example of mission creep.

No successful organization remains the same today as it was in 1970. Most organizations resist change, as do most human beings. In the private sector, competition, reduced demand, a new CEO, or potential obsolescence are motivators for change. With the exception of a new President, those motivating forces do not exist in the public sector. So, rather than begin with a defense of the status quo, environmentalists and Democrats should let the process play out during the budget hearing phase to determine which proposed changes to support or oppose. It is almost a given that the proposed 31% reduction will not be enacted. EPA's budget is $8.1 billion, and the proposed reduction would take it to $5.5 billion, the level it was at in 1990.

When EPA was created, the nation faced serious environmental problems with air and water quality, waste disposal, and exposure to toxic substances. Since then, tremendous progress has been made, as can be seen in EPA's Report on the Environment. As one example, air quality has shown tremendous improvement since measurements began in the late 1970s. Ambient levels of pollutants specified in the Clean Air Act have fallen by amounts ranging from 27% for ozone to 89% for lead, reflecting significant reductions in emissions of covered pollutants. Water quality has also improved but, unlike air pollution, is very difficult to monitor on a national basis. Hazardous waste and toxic substances are addressed through the Resource Conservation and Recovery Act and the Toxic Substances Control Act. Solid progress should be the major reason for rethinking EPA's mission.

While a case could be made that in the early days of environmental management, command and control was a means to make sure that all states developed programs to implement environmental improvement and compliance, the only justifications for command and control today are that it is how things have always been done and that environmental advocates don't want to lose influence with bureaucrats. Each state has set up an environmental department, and those departments issue regular reports on compliance and progress.

A strong national commitment to environmental protection and in-place compliance mechanisms make it possible to delegate implementation, compliance, and enforcement to the states, with EPA focusing on research, technical assistance, oversight, incentives to continue making progress, and, most important, identifying the points of diminishing returns. Instead of micromanaging, the agency should focus on results. How a state achieves specific environmental objectives is less important than their timely achievement.

Over the past eight years, EPA was a regulatory machine on steroids. The number of major regulations (those with an impact of $100 million or more) increased from 76 during the Bush Administration to 229 during the Obama Administration. The annual economic impact rose from $38 billion to over $100 billion.

Many of the Obama regulations were justified by questionable benefits flowing from equally questionable research and analysis. For example, EPA asserted that further tightening of the ozone standard was justified because it would reduce the incidence of asthma attacks. However, the incidence of asthma attacks has been increasing even as air quality has been continually improving. In justifying its Clean Power Plan rule, the agency claimed it would avoid 2,700 to 6,000 premature deaths. According to CDC, there are roughly 900,000 premature deaths annually. EPA would have us believe that its epidemiological methodologies are sufficiently precise to measure changes of between 0.3 and 0.7 percent. EPA manufactured absurd results through modeling and research designed to support its beliefs, not to illuminate environmental conditions and impacts. That approach needs to change, and its research should fill gaps in knowledge while meeting objective scientific standards.
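The arithmetic behind that range is straightforward; here is a quick check in Python, using only the figures cited above:

```python
# Share of total premature deaths represented by EPA's claimed range
# (2,700 to 6,000 avoided deaths against roughly 900,000 per year).
total_premature_deaths = 900_000
for claimed in (2_700, 6_000):
    share = 100 * claimed / total_premature_deaths
    print(f"{claimed:,} of {total_premature_deaths:,} = {share:.2f}%")
# Output: 0.30% and 0.67% -- the size of change EPA claims it can resolve.
```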

A smaller budget changes incentives and helps reveal real environmental priorities.

EPA’s Shaky Foundation for Climate Regulations

During the Obama Administration, the climate agenda was driven by EPA, which used a bad Supreme Court decision to justify regulations that had no sound basis in science or economics.

In reaching its decision, the Court drew a number of conclusions of questionable validity. It concluded that "Because greenhouse gases fit well within the Act's capacious definition of 'air pollutant,' EPA has statutory authority to regulate emission of such gases from new motor vehicles" and that "A well-documented rise in global temperatures has coincided with a significant increase in the concentration of carbon dioxide in the atmosphere. Respected scientists believe the two trends are related." The Court also accepted the IPCC's conclusion that humans have a "discernible influence" on climate.

For years, courts have given deference to agencies based on the Supreme Court's Chevron decision, which held that when a statute is ambiguous, deference should be given to the implementing agency's interpretation.

The Bush Administration could have strongly rebutted the arguments that led to the Court's conclusions but instead opted to challenge the petitioners' standing to bring suit and some technicalities.

Beginning with the conclusion that EPA could determine that CO2 was a pollutant, the Court then easily moved to the decision that the agency could regulate it if it found that CO2 endangered human health and welfare. What the Court ignored was that Congress, in passing the 1990 Clean Air Act amendments, explicitly decided against granting EPA authority to regulate CO2. Clearly, in this instance, deference should have been given to Congressional intent. The Court also ignored the established fact that CO2 is a nutrient, not a pollutant. In citing the rise in global temperatures and the increase in CO2 concentrations, the Court made the serious blunder of equating correlation, which really doesn't exist here, with causality.

By allowing EPA to stretch the definition of pollutant, the Court had to accept that increased concentrations of CO2 in the atmosphere would cause harm and that regulating emissions would avoid that harm. Even if you concede the Court's assumption, there is no way that EPA regulations alone would produce any beneficial effect. To make significant reductions in atmospheric concentrations, it would be necessary for all nations to reduce emissions. At the time of the Court's decision, the only global agreement was the Kyoto Protocol, which exempted developing countries, the major source of emissions. So it should have been abundantly clear that granting EPA the authority to regulate would impose large costs while yielding no reduction in warming and no climate benefits.

Beyond the errors cited, the Supreme Court also ignored a 1993 Supreme Court decision, Daubert v. Merrell Dow Pharmaceuticals, which set forth a standard for scientific evidence. In Daubert, the Court stated, "... in order to qualify as 'scientific knowledge,' an inference or assertion must be derived by the scientific method." It went on to state that a key question "in determining whether a theory ... is scientific knowledge ... will be whether it can be (and has been) tested. Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified." The conclusions of the IPCC, and the work it relies on in concluding that human activities are the primary cause of global temperature increases and associated climate events, are based on climate model outputs, not on the results of experiments that can be falsified. The models have not been validated, nor have most of the assumptions incorporated in them. It is that simple.

Scott Pruitt’s Heresy: Telling the Truth and Not Being Politically Correct

The chattering climate apocalyptics are in a high state of agitation as a result of EPA Administrator Pruitt's comment that "I think that measuring with precision human activity on the climate is something very challenging to do, and there's tremendous disagreement about the degree of impact, so no, I would not agree that it's a primary contributor to the global warming that we see."

The mainstream media and some charter members of the climate club reacted predictably by trotting out the infamous 97% and other elements of the climate orthodoxy, and by hanging the label "denialist" around his neck. It would have been refreshing if at least one major newspaper had asked for an explanation of why he held a view at odds with that of many scientists. Reporting is supposed to be about digging for facts, not pre-emptive dismissal.

Mr. Pruitt did not say that the earth had not warmed, he did not deny that climate changes, and he did not say that there was no human influence on the climate system. His sin was to challenge the certitude of the adherents to the climate orthodoxy. There is an abundance of evidence that the case for humans being mostly responsible for warming since the middle of the last century is a construct resting on a shaky scientific foundation.

No one disputes that CO2 is a greenhouse gas that warms the earth, and no one challenges the conclusion that a doubling of CO2, absent positive feedback, would increase global temperatures by about 1 degree C. The IPCC and other climate advocates assume enhanced feedbacks that will make temperatures in the future (note that it is always decades in the future) significantly higher. While these assertions are made as if there were no uncertainty, the IPCC itself estimates that climate sensitivity is somewhere between 1.5 degrees C and 4.5 degrees C. A factor-of-three spread undermines the certainty with which advocates make their announcements and predictions.
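For readers who want to see where these numbers come from, here is a minimal sketch using the simplified CO2 forcing expression of Myhre et al. (1998); the no-feedback response of roughly 0.3 degrees C per W/m2 is a commonly cited textbook assumption, not a figure taken from this article:

```python
import math

def co2_forcing_wm2(c_new_ppm: float, c_ref_ppm: float) -> float:
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# A doubling of CO2 (any baseline; only the ratio matters) yields ~3.7 W/m^2.
f2x = co2_forcing_wm2(560, 280)

# An assumed no-feedback response of ~0.3 C per W/m^2 gives the
# roughly 1 C per doubling cited above.
print(f"Forcing per doubling: {f2x:.2f} W/m^2")    # ~3.71
print(f"No-feedback warming:  {0.3 * f2x:.1f} C")  # ~1.1

# The IPCC's stated sensitivity range spans a factor of three:
low_c, high_c = 1.5, 4.5
print(f"IPCC range: {low_c}-{high_c} C per doubling ({high_c / low_c:.0f}x spread)")
```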

In addition, if the case for human causality were so compelling, climate advocates would not have to resort to statistical tricks and manipulation to mislead the public. Each year, NASA announces with great fanfare that the current year is one of the warmest on record or, as with 2016, the warmest on record. What NASA doesn't say is that for most of the years in question, the difference from a prior year is measured in tenths of a degree and is often within the margin of measurement error. 2016 was warmer than recent years because of El Nino, but it was not the warmest ever recorded. According to NASA data, 1934 was the same as 2016.

Even the extent of warming is open to debate. Since the end of the Little Ice Age, the US temperature record shows a cyclical pattern but also a warming trend. Over the course of more than 120 years, surface measuring stations and devices have changed. The location of measuring stations, the development of urban areas, and the switch from mercury thermometers to electronic thermistors all affect temperature recordings. Urban development creates heat islands, and adjustments are necessary to correct for the heat they retain. Those adjustments are based on beliefs and assumptions. An audit of the more than 1,200 measuring stations by Anthony Watts found that "89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service's own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/ reflecting heat source." (https://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf)

In 1954, Darrell Huff wrote the classic How to Lie with Statistics. A sequel could be written relying solely on the analytical tricks and misused statistics of climate advocates.

The case for skepticism about the climate orthodoxy provides a strong platform for Mr. Pruitt to stand on. If he holds the EPA staff to accepted standards of scientific evidence, the case built during the Obama Administration by Gina McCarthy will wither away. That would be a real public service.

Starve a Feeding Bureaucrat

Few things in life are predictable with certainty, but the reaction of the climateers to the proposed cut in government climate research certainly was.

The Global Change Research Program dates back to the 1990s and the Clinton Administration.  During that time, the views of Vice President Gore and the strength of his office dominated how federal research dollars were spent. Anyone who questioned the orthodoxy need not apply.

With the election of George W. Bush, the program was reoriented and placed under the Department of Commerce. The new Climate Change Research Initiative (CCRI) was intended "to reduce significant uncertainties in climate science, improve global observing systems, develop science-based information resources to support policymaking and resource management, and communicate findings broadly among the international scientific and user communities."

Not surprisingly, the program was captured by a bureaucracy that reflected the beliefs of Al Gore and NASA's James Hansen. The goal quickly shifted from gaining new scientific knowledge to "research" that would confirm the indictment of CO2 and an impending catastrophe caused by human activities such as driving, heating and lighting homes and commercial facilities, and carrying on lives enriched by the use of fossil fuels.

Between 2002 and 2016, the climate research program grew from about $5 billion to $7.5 billion annually. According to the Congressional Budget Office, there was also a dramatic one-time increase, to almost $38 billion in 2009, as part of the American Recovery and Reinvestment Act.

What did we get for those dollars? Based on the most recent report of the Intergovernmental Panel on Climate Change, not much. We don't know much more about natural variability, climate sensitivity (although its lower bound has been reduced), cloud formation, water vapor, or solar impacts. In the early 2000s, NOAA did initiate an enhanced ocean observation system, which has improved our knowledge of the oceans.

The history of the federal government's climate research program makes clear that unbiased research is not likely to come from the federal bureaucracy. A Cato Institute paper by Patrick Michaels, "Is the Government Buying Science or Support? A Framework Analysis of Federal Funding-induced Biases," provides an insightful explanation for the biases that have been widely documented. Michaels cites a critique of the USGCRP by Professor Judith Curry, recently retired from Georgia Tech. She describes the program as "off the rails" because it is designed to promote policies rather than advance climate science.

None of this should be surprising. The beginning of the Cato report includes a reference to Thomas Kuhn's observation that "fundamental beliefs can take over a scientific field." Kuhn called these entrenched beliefs "paradigms" and noted that they tend to direct scientific thinking in specific directions. Once such beliefs become entrenched, they are difficult to dislodge, despite growing evidence that they may be incorrect. Science, like any human endeavor, is subject to fads and fashions.

James Buchanan, the Nobel Laureate who developed the public choice theory of economics, showed that bureaucrats are like the rest of us: they pursue their own self-interest. Holding government positions doesn't convert them into angels who selflessly guard the public interest. That doesn't make them venal, only human.

There are a lot of questions about the climate system that need answering, but the approach taken over the past several decades won't provide robust answers. And just cutting budgets isn't the answer either. Our nation has a long history of supporting basic research that has produced new knowledge in every field of science. Funding basic research in physics, meteorology, and oceanography at well-known centers of excellence through competitive bidding is one route, though not the only one, to producing improved understanding of our climate system and the human impact on it.

So yes, starve a feeding bureaucrat but not our thirst for knowledge.

CO2: The Climate Change Light Post

The street light effect is a metaphor for knowledge and ignorance, captured in Noam Chomsky's observation that "[climate] science is a bit like the joke about the drunk who is looking under a lamppost for a key that he has lost on the other side of the street, because that's where the light is. It has no other choice."

Climate advocates have used increases in CO2 to predict catastrophic warming unless action is taken to reduce emissions. And the models that are the foundation of the climate orthodoxy have been built on this assumption because CO2 is a greenhouse gas. But CO2's warming potential is not linear: as more is added to the atmosphere, the warming from each increment is less than from the prior one.
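That diminishing effect can be illustrated with the same simplified logarithmic forcing expression (Myhre et al. 1998) used in many back-of-the-envelope treatments; the specific concentrations below are illustrative, not drawn from the text:

```python
import math

def co2_forcing_wm2(c_new_ppm: float, c_ref_ppm: float) -> float:
    # Logarithmic in concentration, so equal additions give shrinking forcing.
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# Three successive 100-ppm additions produce successively smaller increments.
for c1, c2 in [(280, 380), (380, 480), (480, 580)]:
    print(f"{c1} -> {c2} ppm: {co2_forcing_wm2(c2, c1):.2f} W/m^2")
# Output: 1.63, 1.25, 1.01 W/m^2 -- each increment warms less than the last.
```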

Even though climate models routinely over-predict the warming associated with changes in CO2, and climate history shows a lack of causality, the assertion of catastrophic warming is accepted as fact by many policy makers, the mainstream media, and scientists who succumb to groupthink by not taking the time to do their own analysis.

The only explanation for this wrong-headed focus is that CO2 can be measured, taxed, and regulated, whereas the predominant greenhouse gas, water vapor, cannot. Climate change advocates continue to assume that increases in atmospheric concentrations of CO2 will produce a positive feedback, through changes in the amount or distribution of water vapor and clouds, that enhances the warming effect of CO2 emissions. A 2013 paper by Ken Gregory, "Water Vapor Decline Cools the Earth: NASA Satellite Data," challenges that assumption. His observation is not new. In 1990, for example, Robert Jastrow, Fred Seitz, and William Nierenberg made the same point in a George C. Marshall Institute report, "Global Warming: What Does the Science Tell Us."

Gregory's analysis concludes: "Climate models predict upper atmosphere moistening which triples the greenhouse effect from man-made carbon dioxide emissions. The new satellite data from the NASA water vapor project shows declining upper atmosphere water vapor during the period 1988 to 2001," which was the latest data available at the time.

Until climate advocates abandon the groupthink that has been their unifying force, they will be like the drunk who uses a light post for support rather than illumination.

Climate Orthodoxy: The Limits of Professional Judgment

The climate orthodoxy that human activities are primarily responsible for warming since the mid-20th century rests mainly on a foundation of professional judgment, not scientific research involving falsifiable hypotheses. That foundation is also the basis for the so-called consensus of scientists, which has been shown to be an "alternative fact."

There is nothing wrong with being guided by professional judgment as long as it is recognized that professional judgment does not equate to factual certitude. Michael Lewis, in his most recent book The Undoing Project, writes about the work of two psychologists, Amos Tversky and Daniel Kahneman (a Nobel Laureate in economics), whose research completely changed our understanding of human decision making and behavioral economics. The bottom line is that humans, no matter their calling or education, are not as rational as they think they are.

Tversky and Kahneman's insights on professional judgment and the use of subjective probabilities justify healthy skepticism about the role of professional judgment in shaping the climate change narrative.

All of us give weight to professionals who advise us or who are considered experts in a particular field. However, Tversky and Kahneman demonstrated that too much weight and too little skepticism are given to professional judgment, especially on subjects not rich with empirical data. In a paper on how people make judgments, "Subjective Probability: A Judgment of Representativeness," they wrote, "The decisions we make, the conclusions we reach, and the explanations we offer are usually based on our judgments of the likelihood of uncertain events ... . In ... many other uncertain situations, the mind did not naturally calculate the correct odds." Instead, "it replaced the laws of chance with rules of thumb." When people make judgments, they compare the specific topic with a mental model.

They also addressed when the rule-of-thumb (heuristic) approach leads to "serious miscalculation" and concluded that it is when the matter involves judgments under uncertainty: "People do not appear to follow the calculus of chance or statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error."

Their work is relevant to climate change because it is a relatively new field of knowledge and much about how the climate system operates is not well understood. Distinguishing between what is reasonably well known about climate processes and what represents assumptions and best judgments should be made more explicit, leading to a greater level of humility and less certitude.

The weakness in this perspective is the implication that scientists who believe that the science is settled and support the climate orthodoxy have the same failing. How could that be?

Many, perhaps most, of the scientists who represented the field's thinking in the 1980s, when climate change was just an emerging issue, had been active in the environmental movement and were advocates of the limits-to-growth narrative. Maurice Strong, an environmental activist, and John Houghton, the first chair of the IPCC's scientific working group, were instrumental in recruiting scientists who would support climate alarmism. Once that core was in place, billions of dollars in funding were directed to the IPCC process and to research on the climate system and the threat it posed. Climate change became an industry that enriched those who embraced the notion that human activities, mainly burning fossil fuels, were causing a climate catastrophe.

Professor Paul Romer of NYU made an insightful observation about the failure of a scientific field that relies on mathematical modeling. Although he was referring to macroeconomics, his comments are equally applicable to climate science: "Because guidance from authority can align the efforts of many researchers, conformity to the facts is no longer needed as a coordinating device. As a result, if facts disconfirm the officially sanctioned theoretical vision, they are subordinated. Eventually, evidence stops being relevant. Progress in the field is judged by the purity of its mathematical theories, as determined by the authorities."

That represents the groupthink phenomenon, in which groups with common interests or purposes seek cohesion and collegiality at the expense of independent thinking and challenge. Those who do not conform are sanctioned. In the case of climate change, they are labeled skeptics and deniers.

Scientific Shenanigans Equals Loss of Credibility

The CEO of the American Association for the Advancement of Science, Rush Holt, recently said that "scientists are partly to blame for skepticism of evidence in policy making." He was referring to a "haughty attitude" that has generated a backlash within the body politic against all types of scientific evidence. That is undoubtedly true, but it is a very superficial explanation. The climate change debate is a case in point.

The treatment of science and the scientific process by the climate establishment is clear evidence that scientists who promote the climate orthodoxy do not have what Holt refers to as "reverence for evidence." Their reverence is for self-interest and ideology.

Holman Jenkins of the Wall Street Journal recently exposed the shenanigans NOAA and NASA have engaged in when reporting annual temperatures. While both organizations reported that 2016 was the warmest year on record, Jenkins pointed out that this was only the case because both agencies have, since 2009, left out any reference to measurement uncertainty. When error bars are included, 2016 and 2015 are essentially the same, as are a number of earlier years starting with 1998.

When the difference between years is a tenth of a degree or a few tenths of a degree, the news value of these annual reports goes to zero. Omitting any mention of measurement uncertainty is not only misleading but also an instance of what the author Darrell Huff labeled "how to lie with statistics." The problem of data manipulation is not limited to the reporting of annual temperatures. Dr. John Bates, a Department of Commerce Gold Medal winner and former principal scientist at the National Climatic Data Center, exposed a 2015 NOAA study that relied on unverified data and was rushed to publication to discredit the global warming pause for the purpose of influencing negotiations at the climate summit in Paris.
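A simple sketch shows why omitted error bars matter; the anomaly values and the plus-or-minus 0.1 degree uncertainty below are hypothetical, chosen only to illustrate the overlap problem:

```python
# Hypothetical annual anomalies (degrees C) with an illustrative
# +/- 0.1 degree measurement uncertainty -- not actual NOAA/NASA figures.
UNCERTAINTY = 0.1
year_a, year_b = 0.95, 0.90

# A crude test: if the two uncertainty intervals overlap, the "record"
# year is statistically indistinguishable from its predecessor.
overlap = abs(year_a - year_b) <= 2 * UNCERTAINTY
print(f"Difference: {year_a - year_b:+.2f} C; distinguishable: {not overlap}")
```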

NOAA's statistical chicanery with temperature data is not a case of a few well-meaning scientists engaging in trickery to draw public attention to an impending climate catastrophe. It is part of a well-organized initiative to promote the sustainable development agenda. In 2009, the famous Climategate scandal exposed how an influential group of scientists engaged in a conspiracy to discredit so-called skeptics, manipulated the peer review process for self-enrichment and political ends, and distorted statistical data to advance the climate orthodoxy.

Citing the "haughty attitude" of scientists as the reason for a loss of credibility is analogous to blaming the sinking of the Titanic on a weak hull. The misuse of the scientific process to exploit climate change and other scientific issues will continue as long as scientists don't pay a price for pursuing political agendas and self-enrichment, and as long as the leadership of scientific organizations doesn't demand integrity, openness, and respect for dissenting views.

If Mr. Holt wants to improve scientific credibility, he should take the lead in promoting an initiative to establish a higher standard of excellence and transparency for scientific research and for research used in policy making. In particular, since peer review has been gamed, as Climategate revealed, the peer review process needs to be reformed. Also, since science is about challenging prevailing hypotheses and theories, dissent must be promoted and protected, not demonized.

The bottom line is that the scientific establishment needs to do a better job of policing itself and holding scientists and scientific work to the principles set by Richard Feynman and Karl Popper.

There They Go Again

The New York Times announced that, according to NASA and NOAA, 2016 was the warmest year on record, as were the two preceding years. Every year at this time, the media and environmental advocates make similar announcements about the prior year either being a record setter or one of the hottest on record.

By now, most people just yawn at such announcements because they have been told the same tale for so long that it has lost its meaning, as it rightfully should. Relying on their common sense, they also realize that with the exception of a few very cold or very hot days the temperature most days is what they expect. Common sense trumps the orthodoxy.

Claims of record-setting temperatures and dangerous warming make good stories but don't conform to reality. Satellite temperature measurements show no statistically significant warming since 1998. 2016, like 1998, was an El Nino year, which means it was warmer than non-El Nino years. NOAA reports that the 20th-century average temperature was 52 degrees F, while the average for the years 2000 through 2015 was 53.3 degrees F. Clearly, the 16 years prior to 2016 were warmer than the century average, but so were the 16 years starting in 1930, which averaged 53.2 degrees, with 1934 at 54.9 degrees, the same as last year. A difference of 0.1 degrees between comparable periods is hardly newsworthy.
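The comparison reduces to simple subtraction of the period averages just cited:

```python
# Period averages cited above (degrees F, contiguous US, per NOAA).
century_avg = 52.0       # 20th-century average
avg_2000_2015 = 53.3     # the 16 years from 2000 through 2015
avg_from_1930 = 53.2     # the 16 years starting in 1930
print(f"2000-2015 minus 1930s-era period: {avg_2000_2015 - avg_from_1930:+.1f} F")
print(f"2000-2015 minus century average:  {avg_2000_2015 - century_avg:+.1f} F")
```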

The difference between recent averages and the century average is also overstated, for a couple of important reasons. Newer measuring devices (thermistors) that take continuous measurements read warmer than older thermometers, which took measurements twice a day, so earlier warming was understated and those prior measurements required adjustments, which may or may not be accurate. In addition, urban and suburban development has created more heat islands than existed for most of the 20th century. While NOAA and NASA make adjustments for those two factors, the adjustments are estimates, not precise corrections.

MIT professor emeritus Richard Lindzen's reaction to the latest report puts the annual hand-wringing about temperature changes in perspective: "To imply that a rise of temperature of a tenth of a degree is proof that the world is coming to an end — has to take one back to the dark ages." ... "As long as you can get people excited as to whether it's a tenth of a degree warmer or cooler, then you don't have to think, you can assume everyone who is listening to you is an idiot."