Monday, June 30, 2014

Goddard, Watts and data adjustments/On ‘denying’ Hockey Sticks, USHCN data, and all that

Goddard, Watts and data adjustments

Has the recent “discussion” between Steven Goddard and Anthony Watts finally brought some publicity to the questionable data management practices at NOAA?  For a long time the Real Science blog has highlighted the temperature adjustments being made by NOAA in the USHCN final temperature data.  The adjustments include making the past colder while making recent temperatures warmer, resulting in climate change that looks more like the model-predicted climate change. The blog proprietor, “Steven Goddard,” calls the missing data “fabricated.”  A significant amount of station data is infilled by algorithms, and those infilled values seem uniformly to produce higher recent temperatures. Goddard got much less notice than he deserved for his work until it was finally picked up by the news.
Wattsupwiththat picked up the Goddard posts with a series of blog posts, “On denying ‘Hockey sticks,’ USHCN data, and all that,” Part 1 and Part 2.  Along with some history of past disputes between Watts and Goddard, there was a pretty good discussion of methodology. Goddard uses the difference between raw and adjusted data to look at “adjustments.”  Watts’s conclusion was that this method had problems.  There was a curious statement in Part 2 that the raw data can’t be used because it is too polluted. (If it’s too polluted for a difference analysis, what can it be used for?)
During this discussion Paul Homewood at Not A Lot of People Know That published “Massive Temperature Adjustments at Luling, Texas,” showing significant temperature adjustments: up 1.35°C for 2013 and down 0.91°C for 1934.  Homewood also noticed that the USHCN data showed a significant number of E’s (for estimated) where the Texas meteorologist’s office had complete data.  This led to a flurry of findings of estimated data showing up in the NOAA database while being shown as complete in other places.  At a minimum, NOAA has a significant data quality problem.
On June 28, Wattsupwiththat posted “The scientific method at work on USHCN temperature data set,” in which Watts gives Goddard credit for being right about the management of dead stations in the USHCN data set.  Dr. Judith Curry’s blog also had an excellent post on the issue. The comments in these two posts are well worth scanning; they give some hint of the thinking behind adjustments to the data.  I particularly like Anthony Watts’s repeated comment: “When 80% of your network is compromised by bad siting, what makes you think those neighboring stations have any data thats worth a damn? You are adjusting bad data with…drum roll….more bad data.”  Pretty well sums up the problem.
We’re betting a sizable portion of the US economy on a poorly maintained data set with lots of glaring errors in its adjustments?


On ‘denying’ Hockey Sticks, USHCN data, and all that – part 1

Part 2 is now online here.
One of the things I am often accused of is “denying” the Mann hockey stick. And, by extension, the Romm Hockey stick that Mann seems to embrace with equal fervor.
While I don’t “deny” these things exist, I do dispute their validity as presented, and I’m not alone in that thinking. As many of you know, Steve McIntyre and Ross McKitrick, plus many others, have extensively debunked the statistics that went into the Mann hockey stick, showing where errors were made or, in some cases, were known and simply ignored because it helped “the cause”.
The problem with hockey stick style graphs is that they are visually compelling, eliciting reactions like “whoa, there’s something going on there!” Yet oftentimes, when you look at the methodology behind the compelling visual, you’ll find things like “Mike’s Nature Trick“. The devil is always in the details, and you often have to dig very deep to find that devil.
Just a little over a month ago, this blog commented on the hockey stick shape in the USHCN data set which you can see here:
2014_USHCN_raw-vs-adjusted
The graph above was generated by “Steven Goddard” on his blog, and it attracted quite a bit of excitement and attention.
At first glance it looks like something really dramatic happened to the data, but again, when you look at those devilish details, you find that the visual is simply an artifact of methodology. Different methods clearly give different results, and the “hockey stick” disappears when other methods are used.
USHCN-Adjustments-by-Method-Year
The graph above is courtesy of Zeke Hausfather, who co-wrote that blog entry with me. I should note that Zeke and I are sometimes polar opposites when it comes to the surface temperature record. However, in this case we found a point of agreement: the methodology gave a false hockey stick.
I wrote then:
While Goddard’s code and plot produced a mathematically correct result, the procedure he chose (#1 The All Absolute Approach) comparing absolute raw USHCN data and absolute finalized USHCN data, was not, and it allowed non-climatic differences between the two datasets, likely caused by missing data (late reports) to create the spike artifact in the first four months of 2014 and somewhat overstated the difference between adjusted and raw temperatures by using absolute temperatures rather than anomalies.
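To make that concrete before going on, here is a minimal toy sketch of the effect (entirely synthetic numbers, not USHCN data or NCDC code): when the “final” file carries an estimate for every station but the “raw” file is still missing the late reporters, a straight average of absolute values diverges in the most recent period even though the two files agree everywhere both have data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stations, n_years = 50, 20
# Each station has its own climatology (warm desert sites, cool mountain sites, etc.).
climatology = rng.uniform(5.0, 25.0, n_stations)                      # deg C
raw = climatology[:, None] + rng.normal(0.0, 1.0, (n_stations, n_years))

# "Final" data: identical to raw here (no adjustments at all), every station present.
final = raw.copy()

# In the most recent year half the stations have not reported yet, so their raw
# values are missing, while the final file still carries estimates for them.
late_reporters = np.argsort(climatology)[: n_stations // 2]
raw_incomplete = raw.copy()
raw_incomplete[late_reporters, -1] = np.nan

# Absolute-average comparison: mean of all final values minus mean of all raw values.
diff = np.nanmean(final, axis=0) - np.nanmean(raw_incomplete, axis=0)
print(np.round(diff, 2))
# Every year comes out 0.0 except the last, where a large spurious difference
# appears, purely because the raw average is taken over a different mix of stations.
```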
Interestingly, “Goddard” replied in comments with a thank-you for helping to find the reason for this hockey-stick-shaped artifact. He wrote:
stevengoddard says:
http://wattsupwiththat.com/2014/05/10/spiking-temperatures-in-the-ushcn-an-artifact-of-late-data-reporting/#comment-1632952  May 10, 2014 at 7:59 am
Anthony,
Thanks for the explanation of what caused the spike.
The simplest approach of averaging all final minus all raw per year which I took shows the average adjustment per station year. More likely the adjustments should go the other direction due to UHI, which has been measured by the NWS as 8F in Phoenix and 4F in NYC.
Lesson learned. It seemed to me that was the end of the issue. Boy, was I wrong.
A couple of weeks later, Steven Goddard circulated by e-mail a new graph with a hockey stick shape, which you can see below. He wrote this message to me and a few others on the mailing list:
Here is something interesting. Almost half of USHCN data is now completely fake.
Goddard_screenhunter_236-jun-01-15-54
http://stevengoddard.wordpress.com/2014/06/01/more-than-40-of-ushcn-station-data-is-fabricated/
After reading his blog post I realized he had made a critical error, and I wrote back the following e-mail:
This claim: “More than 40% of USHCN final station data is now generated from stations which have no thermometer data.”
Is utterly bogus.
This kind of unsubstantiated claim is why some skeptics get called conspiracy theorists. If you can’t back it up to show that 40% of the USHCN has stopped reporting, then don’t publish it.
What I was objecting to was the claim that 40% of the USHCN network was missing – something I know from my own studies to be false.
He replied with a new graph, a strawman argument, and a new number:
The data is correct.

Since 1990, USHCN has lost about 30% of their stations, but they still report data for all of them. This graph is a count of valid monthly readings in their final and raw data sets.
Goddard_screenhunter_237-jun-01-16-10
The problem was, I was not disputing the data; I was disputing the claim that 40% of USHCN stations were missing and had “completely fake” data (his words).  I knew that to be wrong, so I replied with a suggestion.
On Sun, Jun 1, 2014 at 5:13 PM, Anthony  wrote:
I have to leave for the rest of the day, but again I suggest you take this post down, or at the very least remove the title word “fabricated” and replace it with “loss” or something similar.
Not knowing what your method is exactly, I don’t know how you arrived at this, but I can tell you that what you plotted and the word “fabricated” don’t go together the way you envision.
Again, we’ve been working on USHCN for years, we would have noticed if that many stations were missing.
Anthony
Later, when I returned, I noted a change had been made to Goddard’s blog post. The word “fabrication” remained, but a small change had been made to the claim about stations, with no mention of the edit. Since I had opened a new browser window, I had the before and after of that change, which you can see below:
I thought it was rather disingenuous to make that change without noting it, but I started to dig a little deeper and realized that Goddard was doing the same thing he had done before, when we pointed out the false hockey stick artifact in the USHCN: he was performing a subtraction of the raw versus the final data.
I then knew for certain that his methodology wouldn’t hold up under scrutiny, but beyond doing some more private e-mail discussion trying to dissuade him from continuing down that path, I made no blog post or other writings about it.
Four days later, over at Lucia’s blog “The Blackboard,” Zeke Hausfather took note of the issue and wrote this post about it: How not to calculate temperature
Zeke writes:
The blogger Steven Goddard has been on a tear recently, castigating NCDC for making up “97% of warming since 1990″ by infilling missing data with “fake data”. The reality is much more mundane, and the dramatic findings are nothing other than an artifact of Goddard’s flawed methodology.

Goddard made two major errors in his analysis, which produced results showing a large bias due to infilling that doesn’t really exist. First, he is simply averaging absolute temperatures rather than using anomalies. Absolute temperatures work fine if and only if the composition of the station network remains unchanged over time. If the composition does change, you will often find that stations dropping out will result in climatological biases in the network due to differences in elevation and average temperatures that don’t necessarily reflect any real information on month-to-month or year-to-year variability. Lucia covered this well a few years back with a toy model, so I’d suggest people who are still confused about the subject to consult her spherical cow.
His second error is to not use any form of spatial weighting (e.g. gridding) when combining station records. While the USHCN network is fairly well distributed across the U.S., its not perfectly so, and some areas of the country have considerably more stations than others. Not gridding also can exacerbate the effect of station drop-out when the stations that drop out are not randomly distributed.
The way that NCDC, GISS, Hadley, myself, Nick Stokes, Chad, Tamino, Jeff Id/Roman M, and even Anthony Watts (in Fall et al) all calculate temperatures is by taking station data, translating it into anomalies by subtracting the long-term average for each month from each station (e.g. the 1961-1990 mean), assigning each station to a grid cell, averaging the anomalies of all stations in each gridcell for each month, and averaging all gridcells each month weighted by their respective land area. The details differ a bit between each group/person, but they produce largely the same results.
Now again, I’d like to point out that Zeke and I are often polar opposites when it comes to the surface temperature record, but I had to agree with him on this point: the methodology created the artifact. In order to properly produce a national temperature, gridding must be employed; using the raw data without gridding will create various artifacts.
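For readers who want to see the mechanics, here is a minimal sketch of the anomaly-plus-gridding recipe described above, run on made-up station data (the station locations, baseline period, grid size, and trend are arbitrary illustration choices, not NCDC’s actual processing):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual mean temperatures for a scatter of stations, 1961-2014.
# (The real procedure works month by month; annual values keep the sketch short.)
n_stations, n_years = 40, 54
years = np.arange(1961, 1961 + n_years)
lats = rng.uniform(25.0, 49.0, n_stations)
lons = rng.uniform(-125.0, -67.0, n_stations)
climatology = 30.0 - 0.6 * lats + rng.normal(0, 1, n_stations)   # colder in the north
trend = 0.01 * (years - years[0])                                # shared signal, deg C/yr
temps = climatology[:, None] + trend[None, :] + rng.normal(0, 0.5, (n_stations, n_years))

# 1) Anomalies: subtract each station's own 1961-1990 mean.
base = (years >= 1961) & (years <= 1990)
anom = temps - temps[:, base].mean(axis=1, keepdims=True)

# 2) Assign each station to a 5x5 degree grid cell.
cells = {}
for i in range(n_stations):
    key = (int(lats[i] // 5), int(lons[i] // 5))
    cells.setdefault(key, []).append(i)

# 3) Average anomalies within each cell, then 4) combine cells with a rough
#    cos(latitude) area weight to get the "national" series.
national = np.zeros(n_years)
total_w = 0.0
for (lat_band, _), idx in cells.items():
    w = np.cos(np.deg2rad(lat_band * 5 + 2.5))
    national += w * anom[idx].mean(axis=0)
    total_w += w

national /= total_w
print(np.round(national[-5:], 2))   # gridded-anomaly averages for the last five years
```

The point of step 1 is that a station dropping in or out only shifts the result by its anomaly, not by its whole climatology, which is exactly the failure mode of averaging absolutes.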
Spatial interpolation (gridding) for a national average temperature would be required for a constantly changing dataset such as GHCN/USHCN; no doubt, gridding is a must there. For a guaranteed-quality dataset, where stations are kept in the same exposure and produce reliable data, such as the US Climate Reference Network (USCRN), you could in fact use the raw data as a national average and plot it. Since it is free of the issues that gridding solves, it would be meaningful as long as the stations all report, don’t move, aren’t encroached upon, and don’t change sensors; i.e., the design and production goals of USCRN.
Anomalies aren’t necessarily required; they are an option depending on what you want to present. For example, NCDC gives an absolute value for the national average temperature in their State of the Climate report each month; they also give a baseline and the departure anomaly from that baseline for both CONUS and global temperature.
Now let me qualify that by saying that I have known for a long time that NCDC uses infilling of data from surrounding stations as part of the process of producing a national temperature average. I don’t necessarily agree that their methodology is perfect, but it is a well-known issue, and what Goddard discovered was simply a back-door way of pointing out that the method exists. It wasn’t news to me or to many others who have followed the issue.
This is why you haven’t seen other prominent people in the climate debate (Spencer, Curry, McIntyre, Michaels, McKitrick), or even myself, make a big deal out of this hockey stick of data difference that Goddard has been pushing. If this were really an important finding, you can bet they and yours truly would be talking about it and providing support and analysis.
It’s also important to note that Goddard’s graph does not represent a complete loss of data from these stations. The differencing method that Goddard is using detects every missing data point from every station in the network. This could be as simple as one day of data missing in an entire month, or a string of days, or even an entire month, which is rare. Almost every station in the USHCN is missing some data at one time or another. One exception might be the station at Mohonk Lake, New York, which has a perfect record due to a dedicated observer but has other problems related to siting.
If we were to throw out an entire month’s worth of observations because one day out of 31 is missing, chances are we’d have no national temperature average at all. So the method was created to fill in missing data from surrounding stations. In theory and in a perfect world this would be a good method, but as we know the world is a messy place, and so the method introduces some additional uncertainty.
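As a rough illustration of that idea, here is a sketch of generic neighbour-based infilling (a simple inverse-distance weighting, not the operational FILNET algorithm; the station values and coordinates are invented):

```python
import numpy as np

def infill(values, lats, lons, missing_idx, power=2.0):
    """Estimate missing station values from neighbouring stations, weighting
    nearby stations more heavily (inverse-distance weighting). A generic
    sketch only, not NCDC's operational FILNET procedure."""
    filled = values.copy()
    for i in missing_idx:
        donors = [j for j in range(len(values))
                  if j not in missing_idx and not np.isnan(values[j])]
        # Crude flat-earth distance in degrees; adequate for an illustration.
        d = np.hypot(lats[donors] - lats[i], lons[donors] - lons[i])
        w = 1.0 / np.maximum(d, 0.1) ** power
        filled[i] = np.sum(w * values[donors]) / np.sum(w)
    return filled

# Five stations reporting a monthly mean (deg C); station 2 failed to report.
temps = np.array([21.3, 22.1, np.nan, 19.8, 20.5])
lats  = np.array([39.1, 39.4, 39.3, 39.9, 38.8])
lons  = np.array([-121.5, -121.2, -121.4, -120.9, -121.8])

print(infill(temps, lats, lons, missing_idx=[2]))
# The gap is filled with a weighted blend of its neighbours, which is exactly
# why poorly sited neighbours end up "polluting" the estimate.
```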
The National Cooperative Observer network, a.k.a. the co-op network, is a mishmash of widely different stations and equipment, and it is a much larger set of stations than the USHCN: the USHCN is a subset of the larger co-op network, which comprises some 8,000 stations around the United States. Some are stations in observers’ backyards or at their farms, some are at government entities like fire stations and ranger stations, and some are electronic ASOS systems at airports. The vast majority of stations are poorly sited, as we have documented with the surface stations project; by our count 80% of the USHCN consists of poorly sited stations.  The real problem is with the micro-site issues of the stations. This is something that is not effectively dealt with in any methodology used by NCDC. We’ll have more on that later, but I wanted to point out that no matter which data set you look at (NCDC, GISS, HadCRUT, BEST) the problem of station siting bias remains and is not dealt with. For those who don’t know, NCDC provides the source data for the other interpretations of the surface temperature record, so they all have it. More on that later, perhaps in another blog post.
When it was first created, the co-op network was run entirely on paper forms called B-91s. The observer would write down the daily high and low temperatures along with precipitation for each day of the month and then, at the end of the month, mail it in. An example B-91 form from Mohonk Lake, NY is shown below:
mohonk_lake_b91_image
Not all forms are so well maintained. Some B-91 forms have missing data, which can be due to the observer missing work, having an illness, or simply being lazy:
Marysville_B91
The form above is missing weekends because the secretary at the fire station doesn’t work on weekends and the firefighters aren’t required to fill in for her. I know this, having visited this station and interviewed the people involved.
So, in such an imperfect, “you get what you pay for” world of volunteer observers, you know from the get-go that you are going to have missing data, and so, in order to be able to use any of these records at all, a method had to be employed to deal with it, and that was infilling of data. This has been done for years, long before Goddard “discovered” it.
There was no nefarious intent here. NOAA/NCDC isn’t purposely trying to “fabricate” data as Goddard claims; they are simply trying to figure out a way to make use of the data at all.  The word “fabrication” is the wrong word to use, as it implies the data is being plucked out of thin air. It isn’t – it is being gathered from nearby stations and used to create a reasonable estimate. Over short ranges one can reasonably expect daily weather (temperature at least, precip not so much) to be similar, assuming the stations are similarly sited and equipped, but that’s where another devil in the details exists.
Back when I started the surfacestations project, I noted that one long-period, well sited station, Orland, was in a small sea of bad stations, and that its temperature diverged markedly from its neighbors, like the horrid Marysville fire station where the MMTS thermometer was directly next to asphalt:
marysville_badsiting[1]
Orland is one of those stations that reports on paper at the end of the month. Marysville (shown above) reported daily using the touch-tone weathercoder, so its data was available by the end of each day.
What happens in the first runs of the NCDC CONUS temperature process is that they end up with mostly the airport ASOS stations and the weathercoder stations. The weathercoder reporting stations tend to be more urban than rural, since a lot of observers don’t want to make long distance phone calls. And so, in the case of missing station data in the early-in-the-month runs, we tend to get a collection of the poorer sited stations. The FILNET process, designed to “fix” missing data, goes to work and starts infilling data.
A lot of the “good” stations don’t get included in the early runs, because the rural observers often opt for a paper form mailed in rather than the touch-tone weathercoder, and those stations have data infilled from many of the nearby ones, “polluting” the data.
And as we showed back in 2012, those stations have a much lower century-scale trend than the majority of stations in the surface network. In fact, by NOAA’s own siting standards, over 80% of the surface network is producing unacceptable data, and that data gets blended in.
Steve McIntyre noted that even in good stations like Orland, the data gets “polluted” by the process:
http://climateaudit.org/2009/06/29/orland-ca-and-the-new-adjustments/
So, imagine this going on for hundreds of stations, perhaps even thousands early on in the month.
To the uninitiated observer, this “revelation” by Goddard could look like NCDC is in fact “fabricating” data. Given the sorts of scandals that have happened recently with government data, such as the IRS “loss of e-mails”, the padding of jobs and economic reports, and other issues from the current administration, I can see why people would easily embrace the word “fabrication” when looking at NOAA/NCDC data. I get it. Expecting it because much of the rest of the government has issues doesn’t make it true, though.
What is really going on is that the FILNET algorithm, designed to fix a few stations that might be missing some data in the final analysis, is running a wholesale infill on early incomplete data, which NCDC pushes out to their FTP site. The amount of infilling gets less and less as the month goes on, as more data comes in.
But over time, observers have been less inclined to produce reports, and attrition in both the USHCN and the co-op network is something that I’ve known about for quite some time, having spoken with hundreds of observers. Many of the observers are older people, and some of the attrition is due to age, infirmity, and death. You can see what I’m speaking of by looking through the quarterly NOAA co-op newsletter seen here: http://www.nws.noaa.gov/om/coop/coop_newsletter.htm
NOAA often has trouble finding new observers to take the place of the ones they have lost, and so it isn’t a surprise that over time we would see the number of missing data points rise. Another factor is technology: many observers I spoke with wonder why they still even do the job when we have computers and electronics that can do it faster. I explained to them that their work is important because automation can never replace the human touch. I always thank them for their work.
The downside is that the USHCN is a very imperfect and heterogeneous network and will remain so; it isn’t “fixable” at an operational level, so statistical fixes are resorted to. That has both good and bad influences.
The newly commissioned USCRN will solve that with its new data gathering system; some of its first data is now online for the public.
USCRN_avg_temp_Jan2004-April2014
Source: NCDC National Temperature Index time series plotter
Since this is a VERY LONG post, it will be continued…in part 2
In part 2 I’ll talk about things that we disagree on and the things we can find a common ground on.
Part 2 is now online here.


On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay, which you can see here, I got quite a lot of feedback from both sides of the climate debate. Some people thought that I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement and propose a solution.
In part one of the essay we focused on the methodology that created a hockey stick style graph illustrating missing data. Because the missing data caused a faulty spike at the end, Steve McIntyre commented, suggesting that it was more like the Marcott hockey stick than like Mann’s:
Steve McIntyre says:
Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.
However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.
In retrospect, I believe McIntyre is right in making that comparison. Data dropout is the central issue here and when it occurs it can create all sorts of statistical abnormalities.
Despite some spirited claims in comments in part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.
It is something that has been known about for years and is actually expected in the messy data gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact there is likely no weather network in existence that has perfect data with none missing. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:
CRN_missing_data
Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv
What is in dispute is the methodology, and the methodology, as McIntyre observed, created a false “hockey stick” shape much like we saw in the Marcott affair:
marcott-A-1000[1]
After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:

alkenone-comparison
McIntyre points out this in comments in part 1:
In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.
So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.
That’s the same sort of issue that we saw in Goddard’s plot; data was thinning near the endpoint of the present.
Goddard_screenhunter_236-jun-01-15-54
[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]
While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.
These surface networks have had so many changes over time (stations that have been moved, had their time of observation changed, had equipment changes or maintenance issues, or have been encroached upon by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements; there is simply too much polluted data.
A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:
LasVegas_average_temps
Here, growth of the city and its population has resulted in a clear and undeniable UHI signal at night, with lows gaining 10°F since measurements began. It is studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer in his blog post called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, if we were to use the raw data from Las Vegas, we would know it has been polluted by the UHI signal, so is it representative in a national or global climate presentation?
LasVegas_lows
The same trend is not visible in the daytime Tmax temperature; in fact it appears there has been a slight downward trend since the late 1930s and early 1940s:
LasVegas_highs
Source for data: NOAA/NWS Las Vegas, from
http://www.wrh.noaa.gov/vef/climate/LasVegasClimateBook/index.php
The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?
From my perspective the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue, however, is when you start using stations like this to search for the posited climate signal from global warming. Since the nighttime temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal and was doing it by examining stations rather than throwing blind automated adjustments at them, I would most certainly remove Las Vegas from the mix, as its raw data is unreliable, having been badly and likely irreparably polluted by UHI.
Now, before you get upset and claim that I don’t want to use raw data, or as some call it “untampered” or unadjusted data, let me say nothing could be further from the truth. The raw data represents the actual measurements; anything that has been adjusted is not fully representative of the measurement reality, no matter how well-intentioned, accurate, or detailed those adjustments are.
But, at the same time, how do you separate all the other biases that have not been dealt with (like Las Vegas) so you don’t end up creating national temperature averages with imperfect raw data?
That my friends, is the $64,000 question.
To answer that question, we have a demonstration. Over at The Blackboard blog, Zeke has plotted something that I believe demonstrates the problem.
Zeke writes:
There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:
Averaged Absolutes
Egads! It appears that the world’s land has warmed 2C over the past century! Its worse than we thought!
Or we could use spatial weighting and anomalies:

Gridded Anomalies
Now, I wonder which of these is correct? Goddard keeps insisting that its the first, and evil anomalies just serve to manipulate the data to show warming. But so it goes.
Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?
My answer is: neither of them are absolutely correct.
Why, you ask?
It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station itself, the sensors, the maintenance, time of observation changes, data loss, etc. In both cases we are plotting data which is a huge mishmash of station biases that have not been dealt with.
NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.
In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.
Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:
Goddard and NCDC methods 1895-2013
[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies - Anthony]
Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.
At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections, biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22: on one hand the raw data has issues; on the other, at the bare minimum some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainty are introduced.
There is no magic bullet that always hits the bullseye.
I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.
While both methods have flaws, the issue that Goddard raised has one good point, and an important one: the rate of data dropout in USHCN is increasing.
When data gets lost, they infill with other nearby data, and that’s an acceptable procedure, up to a point. The question is, have we reached a point of no confidence in the data because too much has been lost?
John Goetz asked the same question as Goddard in 2008 at Climate Audit:
How much Estimation is too much Estimation?
It is still an open question, and one without a good answer yet.
But while we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product and at the same time advocating using the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?
Clearly, with the mess the USHCN and GHCN are in, the raw data won’t produce a representative result for the true climate change signal of the nation, because it is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised the way Las Vegas has been, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.
Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature, because that same data is increasingly lossy and is also full of other biases that are not dealt with. [added: His method allows biases to enter that are mostly about station composition, and less about infilling; see this post from Zeke]
As a side note, claiming “fabrication” in a nefarious way doesn’t help, and generally turns people off to open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” only puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.
As seen in the map below, there are thousands of temperature stations in the US co-op and USHCN networks. By our surface stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and by extension, given the large sample size of the USHCN subset of the co-op network that we surveyed, that should translate to the larger network.
USHCN_COOP_Map
When data drops out of USHCN stations, data from nearby neighbor stations is infilled to make up the missing data, but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:
bowls-USmap
The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching a point where it is adding a majority of biased signal from nearby problematic stations. Imagine a well sited, long-period station in a rural area near Las Vegas that has its missing data infilled using Las Vegas data; you know it will be warmer when that happens.

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is also another, simpler way.
Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI shows us a way.
Real Climate comment from Eric Steig (response at bottom)
We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.
For those who don’t know what the Rossby radius is, see this definition.
Steig claims 30 station records are all that are needed globally. In a comment some years ago (now probably lost in the vastness of the Internet) we heard Dr. Gavin Schmidt say something similar, that about “50 stations” would be all that is needed.
[UPDATE: Commenter Johan finds what may be the quote:
I did find this Gavin Schmidt quote:
“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”
http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]
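As a rough sanity check on that redundancy claim, here is a toy sketch (synthetic anomalies built from one shared regional signal plus local noise; no real GHCN records involved): averaging a random half of the stations reproduces the full-network average almost exactly, because the stations are so strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(2)

n_stations, n_years = 1100, 60
# One shared "regional" signal plus independent local noise at each station,
# mimicking the strong spatial correlation of temperature anomalies.
signal = np.cumsum(rng.normal(0.0, 0.1, n_years))
anoms = signal[None, :] + rng.normal(0.0, 0.8, (n_stations, n_years))

full_mean = anoms.mean(axis=0)
subset = rng.choice(n_stations, n_stations // 2, replace=False)
half_mean = anoms[subset].mean(axis=0)

print("correlation:", round(np.corrcoef(full_mean, half_mean)[0, 1], 4))
print("largest disagreement (deg C):", round(float(np.abs(full_mean - half_mean).max()), 3))
# When stations are this correlated, half the network (or far less) recovers
# essentially the same anomaly series as the whole network.
```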
So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30 and 50 stations globally…why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?
It is a question nobody at NOAA has ever really been able to answer for me. It is certainly important to keep the records from all these stations for local climate purposes, but why try to keep them in the national and global dataset when Real Climate Scientists say that just a few dozen good stations will do just fine?
There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations in the USHCN and the co-op network:
crn_map
NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:
USCRN_avg_temp_Jan2004-April2014
If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead try to locate the best stations with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable/uncertain results, we also end all the complaints about data falsification and the quibbling over whose method really does find the “holy grail of the climate signal” in the US surface temperature record.
Now you know what Evan Jones and I have been painstakingly doing for the last two years, since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend:

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to effectively deal with all criticisms from then, and criticisms we have from within our own team. Of course if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time where we can get it.

The way forward:

It is within our grasp to locate and collate stations in the USA and in the world that have as long an uninterrupted record and as much freedom from bias as possible, and to make that a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.
We’ve found at least that many good stations in the USA that meet the criteria of being reliable and needing no major adjustments of any kind, including the time-of-observation change (TOB). Some do require the cooling bias correction for the MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located around the world. The challenge is metadata, some of which is not publicly available, but with crowd sourcing such a project might be doable, and then we could fulfill Gavin Schmidt and Eric Steig’s vision of a much simpler set of climate stations.
Wouldn’t it be great to have a simpler and known-reliable set of stations rather than this mishmash which goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do independently of them, and it is a place where climate skeptics can make a powerful contribution, one that would be far more productive than the arguments over adjustments and data dropout.

Friday, June 27, 2014

Here is a Novel Campaign Tactic

The robot takeover has begun, at least according to Timothy Ray Murray, who lost the election for Oklahoma’s 3rd district (obtaining just 5.2 percent of the vote) to incumbent Frank Lucas (R). Murray is now planning to challenge the election results, on the grounds that his opponent has been replaced by a robot body double.


Murray’s Web site notes that “The election for U.S. House for Oklahoma’s 3rd District will be contested by the Candidate, Timothy Ray Murray. I will be stating that his votes are switched with Rep. Lucas votes, because it is widely known Rep. Frank D. Lucas is no longer alive and has been displayed by a look alike. Rep. Lucas’ look alike was depicted as sentenced on a white stage in southern Ukraine on or about Jan. 11, 2011.”


(Lucas told news channel KFOR that he had never in fact been to Ukraine. He also observed: “Many things have been said about me, said to me during course of my campaigns. This is the first time I’ve ever been accused of being a body double or a robot.”)


Murray’s Web site goes on to reassure us that: “I, Timothy Ray Murray, am a human, born in Oklahoma, and obtained and continue to fully meet the requirements to serve as U.S. Representative when honored to so.  I will never use a look alike…” This makes me wonder. “I’m definitely a human,” like “I am very suave and good at talking to people,” is the sort of statement that seems untrue the instant you say it.

Thursday, June 26, 2014

Easy Money

Christopher Keating is offering a $10,000 prize to anyone over 18 years of age who can prove that global warming is not occurring.  Here is your chance!


http://news.yahoo.com/physicist-10-000-anyone-prove-climate-change-isnt-132433555.html

Wednesday, June 25, 2014

Search my cell phone? Show me your warrant!

Washington (CNN) -- The Supreme Court on Wednesday unanimously ruled that police may not search the cell phones of criminal suspects upon arrest without a warrant -- a sweeping endorsement for privacy rights.
By a 9-0 vote, the justices said smart phones and other electronic devices were not in the same category as wallets, briefcases, and vehicles -- all currently subject to limited initial examination by law enforcement.

Here Is The Reason For The Total Collapse In Q1 GDP/MAGIC NUMBERS

Here Is The Reason For The Total Collapse In Q1 GDP

By Tyler Durden




 
Remember back in April, when the first GDP estimate was released (a gargantuan-by-comparison 0.1%, since revised to a depression-equivalent -2.9%), we wrote: "If It Wasn't For Obamacare, Q1 GDP Would Be Negative." Well, now that GDP is not only negative, but the worst it has been in five years, we are once again proven right. But not only because GDP was indeed negative: the real reason for today's epic collapse in GDP was, you guessed it, Obamacare.
Here is the chart we posted in April, showing the contribution of Obamacare, aka Healthcare Services spending. It was, in a word, an all time high.


Turns out this number was based on.... nothing.
Because as the next chart below shows, between the second and final revision of Q1 GDP something dramatic happened: instead of contributing $40 billion to real GDP in Q1, Obamacare magically ended up subtracting $6.4 billion from GDP. This, in turn, resulted in a collapse in Personal Consumption Expenditures as a percentage of GDP to just 0.7%, the lowest since 2009!

Don't worry though: this is actually great news! Because the brilliant propaganda minds at the Dept of Commerce figured out something banks also realized with the stub "kitchen sink" quarter in November 2008. Namely, since Q1 is a total loss in GDP terms, let's just remove Obamacare spending as a contributor to Q1 GDP and shove it into Q2.
Stated otherwise, some $40 billion in PCE that was supposed to boost Q1 GDP will now be added to Q2-Q4.
And now, we all await as the US department of truth says, with a straight face, that in Q2 the US GDP "grew" by over 5% (no really: you'll see).

Tuesday, June 24, 2014

The reality deniers.

Reality deniers get away with many of their fibs because we live in an age in which we have been taught that most statements of fact are really conditional, and there is always a possibility that some outrageous statement could be true or something that most people believe to be true could be false. The Obama administration manages to sell to the gullible media that their proposals to mitigate global warming by spending trillions of dollars and destroying millions of jobs over the next few decades are necessary for our survival as a species. The administration, though, admits that if the president manages to implement all of his new rules and laws, the world would only be two-tenths of 1 degree Fahrenheit cooler in a hundred years than if we did nothing. There is no real benefit to forcing all of those coal miners in West Virginia and elsewhere to lose their jobs.
When asked, reality deniers cannot tell you what the optimum amount of carbon dioxide in the atmosphere should be (currently, it is about 0.04 percent). We do know that more carbon dioxide causes plants to grow faster, and that is why they pump the gas into greenhouses. We also know that the world is getting greener. Studies have shown that world vegetation increased somewhere between 11 percent and 14 percent in the years 1982 to 2011, which means that food is more plentiful and that there is more habitat for wildlife. The southern Sahara (the Sahel) and the Amazon are getting greener, not drier (as many radical environmentalists have claimed). This is just one of the side benefits of a little more carbon dioxide. By the way, the Earth has not warmed in the past 17 years, as the reality deniers and their models claimed it would. The oceans have been rising since the end of the last ice age. About 20 years ago when better satellite technology became available, it appeared that the oceans were rising at a bit faster rate of about one foot per century, yet that rate has slowed for the last nine years.
In the coming years, we face many problems that are far more likely to kill people than global warming — unsustainable government debt and global terrorism — but the reality deniers prefer to waste resources on a potential threat a century from now than to deal with the immediate threats to our well-being.
Reality deniers seem to think we can make people wealthier just by mandating things like a high minimum wage. Such folks ignore the fact that the least skilled will not be hired at all, and prices will be higher on many goods and services, which is most painful for low-income people. The fact is the minimum wage is cruel and liberty-destroying for many people, by denying them the opportunity to work for less for various reasons, including developing job skills.
Every time I see a politician demanding that the government spend more for this or that, without cutting some other program by an equal or greater amount, I know they are reality deniers, because numerous studies show that the U.S. government (and that of most other countries) spend far more than the optimum amount for economic growth and job creation. That is, all the additional spending will only make things worse, not better.
Those who advocate higher taxes, or more regulations or more big-government bureaucracies like Veterans Affairs or the Internal Revenue Service (IRS), are reality deniers, because they ignore all of the evidence of the last 2,500 years as to why big government makes us worse off rather than better off.
The reason the political class is so filled with reality deniers is that all too many voters are also reality deniers. Out of ignorance or the inability to think beyond Stage One, they think they can get something for nothing and, hence, are willing to vote for those who promise fantasyland.
At some point, many reality deniers seem to start believing their own lies, that lying doesn’t matter or the public is too stupid to notice they are lying. The missing IRS emails are a case in point. Certainly, there is a very, very small probability that the critical emails of those who just happened to be under congressional investigation were all destroyed in a computer crash and only theirs were not backed up.

So which graph is the truth?




 From http://www.skepticalscience.com

The data (green) are the average of the NASA GISS, NOAA NCDC, and HadCRUT4 monthly global surface temperature anomaly datasets from January 1970 through November 2012, with linear trends for the short time periods Jan 1970 to Oct 1977, Apr 1977 to Dec 1986, Sep 1987 to Nov 1996, Jun 1997 to Dec 2002, and Nov 2002 to Nov 2012 (blue), and also showing the far more reliable linear trend for the full time period (red).
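For what it's worth, the statistical point in that caption is easy to reproduce with a toy series: slopes fitted to short windows scatter widely around the underlying rate, while the full-record fit pins it down. Here is a minimal sketch with synthetic data (not the actual GISS/NCDC/HadCRUT series; the windows only roughly mirror the ones listed above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly anomalies: a steady 0.016 deg C/yr trend plus noise.
months = np.arange(1970, 2013, 1.0 / 12.0)
series = 0.016 * (months - 1970) + rng.normal(0.0, 0.1, months.size)

def trend_per_decade(t0, t1):
    """Least-squares slope over [t0, t1], in deg C per decade."""
    m = (months >= t0) & (months <= t1)
    return 10.0 * np.polyfit(months[m], series[m], 1)[0]

short_windows = [(1970, 1977.8), (1977.3, 1987.0), (1987.7, 1996.9),
                 (1997.5, 2003.0), (2002.9, 2012.9)]
for t0, t1 in short_windows:
    print(f"{t0:.0f}-{t1:.0f}: {trend_per_decade(t0, t1):+.2f} C/decade")
print(f"full record: {trend_per_decade(1970, 2013):+.2f} C/decade")
# Short windows give noisy estimates of the underlying 0.16 C/decade rate;
# the full-record fit recovers it much more tightly.
```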






  


From https://stevengoddard.wordpress.com/



https://stevengoddard.files.wordpress.com/2014/06/screenhunter_627-jun-22-21-18.gif


Right after the year 2000, NASA and NOAA dramatically altered US climate history, making the past much colder and the present much warmer. The animation below shows how NASA cooled 1934 and warmed 1998, to make 1998 the hottest year in US history instead of 1934. This alteration turned a long term cooling trend since 1930 into a warming trend.

 


So, which graph is the truth? How do we tell? Do the "adjustments" mentioned in Rick's article play a part in the overall thinking on the subject? Do modern climate "modeling" and "alterations" play a positive or negative role? Does the 2+ degree increase measured at Mohonk Preserve hold true for a national or global conversation? Or does the less-than-half-degree change shown in the graph above guide us going forward?

Comments are welcome. After all, climate change is a 29-billion-dollar-a-year, taxpayer-supported industry. Are we getting the truth? Are we getting our money's worth?




Monday, June 23, 2014

Summers Used To Be Much Hotter In The US

Summers Used To Be Much Hotter In The US

One year ago this week, President Barack Obama waited for a hot day in Washington and hurriedly held an outdoor speech, where he could perspire, engage in third-rate acting, and blame global warming.
ScreenHunter_629 Jun. 23 07.08
The president might be surprised to learn that temperatures are normally hot in the summer. The graph below shows the annual percentage of US June temperatures above 100 degrees through the 21st of the month.
Prior to 1954, 100 degree June days were much more common, and this June has been below the 1895-2014 mean.
ScreenHunter_628 Jun. 23 07.02
For the entire summer, the frequency of 100 degree days has dropped dramatically since the 1930s.
ScreenHunter_403 Jun. 10 04.43 
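The tally behind a graph like that is simple to sketch. Here is a hedged illustration with made-up daily maximums (random numbers standing in for the HCN daily files, so the output carries no climate meaning):

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up daily June maximum temperatures (deg F): years x stations x days 1-21.
years = np.arange(1895, 2015)
tmax = rng.normal(88.0, 8.0, (years.size, 600, 21))

# Percentage of all station-days at or above 100 F, for each year.
pct_100 = 100.0 * (tmax >= 100.0).mean(axis=(1, 2))

for yr, pct in zip(years[:3], pct_100[:3]):
    print(f"{yr}: {pct:.1f}% of June station-days reached 100 F")
# On real HCN daily data, plotting pct_100 against year gives the kind of
# percentage-of-hot-days series shown in the graphs above.
```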


Using Thermometer Data Is Now Considered Data Tampering

I have been accused several times in the last 24 hours of data tampering and cherry picking, because I use the entire unaltered, measured HCN data set.
Only altered, fabricated data which reverses the measured US cooling trend is considered acceptable by climate alarmists.



Summer Is A Thing Of The Past In Norway




ScreenHunter_626 Jun. 22 19.05 


NOAA And NASA Data Alterations Are Global

I’ve shown how NASA and NOAA have dramatically altered the US temperature record, and they are doing the same thing in other countries, like Iceland and Australia – almost always cooling the past, which creates the appearance of warming.
Iceland

Vestmannaeyja
Australia




AliceSprings 


NOAA/NASA Dramatically Altered US Temperatures After The Year 2000

Prior to the year 2000, NASA showed US temperatures cooling since the 1930′s, and 1934 much warmer than 1998.
ScreenHunter_627 Jun. 22 21.18
NASA GISS: Science Briefs: Whither U.S. Climate?
NASA’s top climatologist said that the US had been cooling
Whither U.S. Climate?
By James Hansen, Reto Ruedy, Jay Glascoe and Makiko Sato — August 1999
Empirical evidence does not lend much support to the notion that climate is headed precipitately toward more extreme heat and drought.
in the U.S. there has been little temperature change in the past 50 years, the time of rapidly increasing greenhouse gases — in fact, there was a slight cooling throughout much of the country
NASA GISS: Science Briefs: Whither U.S. Climate?
NOAA and CRU also reported no warming in the US during the century prior to 1989.
February 04, 1989
Last week, scientists from the United States Commerce Department’s National Oceanic and Atmospheric Administration said that a study of temperature readings for the contiguous 48 states over the last century showed there had been no significant change in average temperature over that period. Dr. (Phil) Jones said in a telephone interview today that his own results for the 48 states agreed with those findings.
New York Times
Right after the year 2000, NASA and NOAA dramatically altered US climate history, making the past much colder and the present much warmer. The animation below shows how NASA cooled 1934 and warmed 1998, to make 1998 the hottest year in US history instead of 1934. This alteration turned a long term cooling trend since 1930 into a warming trend.

Fig.D.gif (525×438)
But NASA and NOAA have a little problem. The EPA still shows that heat waves during the 1930s were by far the worst in the US temperature record.
ScreenHunter_556 Jun. 20 05.14
high-low-temps-figure1-2014
Heat waves in the 1930s remain the most severe heat waves in the U.S. historical record (see Figure 1).
High and Low Temperatures | Climate Change | US EPA
George Orwell explained how this worked.
“He who controls the past controls the future. He who controls the present controls the past.”
― George Orwell, 1984

Sunday, June 22, 2014

Congressional Platoon

There are 535 voting members  in Congress.  Let's suit them up, give them some guns and send them over to Iraq. Let them finish what they started.

Wednesday, June 18, 2014

Something on health care

Found this article in the New England Journal of Medicine: http://www.nejm.org/doi/full/10.1056/NEJMp1106090  It's a pretty interesting look at one real-life example of what happened when going from a single-payer system to a "competition based" model. Since being in school, I've come across more articles and tidbits of information, and we are nowhere near a reasonable solution, which is no surprise to anyone. One of the better quotes I've heard is that a dollar of spending on health care is a dollar of income to somebody. This is self-evident of course, but the reality is that when we talk about cost savings, cutting costs, or managing costs, we are indirectly saying that we want SOMEONE to make less money. One suggestion being offered is that if people have only a limited amount of dollars to spend, insurance companies will "compete" for those dollars and this will magically make health care cheaper. The reality of course is that all it does is create a middle man who denies payment (manages costs) or denies coverage (lowers demand for service and hence, payment).

No matter what system is in place, health care is going to cost more every year. Period. New drugs, new procedures, and new knowledge develop all the time, and when they are deployed, they cost more money. Competition, as described above, will simply dictate that doctors and hospitals make less money while they deliver more and more complex care. Have we seen this before with competition? Of course we have. While the productivity of the American worker increased quite a bit over several decades, their wages did not. And as I'm sure most here know, despite the vast amount of money that we spend, our outcomes are not really comparable to those of countries that spend less.

Admittedly, I am in favor of a single-payer system. Not because I favor socialism or any other stupid ideology, but because after dealing with insurance companies from within health care, I feel like they are just leeches who do nothing but shuffle money around and suck out their share while doing absolutely nothing to improve the system or the lives of the insured. That said, I am also in favor of having our fat-assed nation stop eating such shitty food and start taking better care of itself. We could legitimately spend trillions of dollars less on health care if people consumed less care by attempting to be more healthy. For quite a while, we are going to remain locked in a philosophical battle about this rather than engaged in a discussion about what is the best path to finding a solution that a majority of people in this country want. Obamacare is a bad plan, but what people like Ryan are offering is simply not what people like seniors want.

Hottest May on record.




NASA


May was really, really hot.


How hot was it?


It was so hot that two leading trackers of global temperatures determined that there’s never been a hotter May on record. NASA found the average temp around the world last month was 1.38 degrees Fahrenheit hotter than the long-term average.


And these guys have been tracking temperatures for 135 years.


The Japanese Meteorological Agency came up with similar results and found that March and April also combined to make for the most scorching spring on record.


There’s another closely watched report coming from the NOAA this week, and it typically falls in line with the other two, according to Mashable.


By all indications, 2014 is shaping up to compete for the hottest year we’ve ever seen, especially with the prospects of even warmer weather patterns on the way.


The European Centre for Medium-range Weather Forecasts is predicting that there's a 90% chance of an El Niño occurring this year. If that happens, look for more hurricanes in the Pacific and more temperature records to fall.




Shawn Langlois