LeafByNiggle
Well-known member
> Was it incorrect in principle, or just in arithmetic in the illustration? If so, what is it? The principle that averages have less mean error than the components is sound.

Incorrect.
I suggest you review your math.
> That’s your take on the situation.

Because people like yourself exaggerate to a ridiculous degree.
When I, with no statistical training or advanced math degree, can look at charts and see the flaw, there is a problem.
When I, with little science training, can see that the claims made exceed the science available, there is a problem.
When I am accused of murder, there is a problem.
One lie too many.
And now my first response is a strong disagreement.
> yes, that’s the issue – the differences of means, not the means themselves; as long as the data & means are “reliable,” they don’t have to be as “valid.”

There are two problems with your analysis.

- Historical temperature measurements have more resolution than you imagine. The technology has gradually improved over the years, but a very accurate sensor is based on an electrical RTD device, which today has accuracy to about 0.03 C, and resolution limited only by the resolution of the measurement of resistance, which can be quite precise using a bridge circuit. Remember that for the purpose of developing temperature-change data over time, resolution is more important than accuracy, as long as changes over time can be measured consistently. It is worth noting that the RTD effect was discovered in 1821, and resistive bridge circuits go back further than that.
- For the purposes of measuring changes in temperature, the precision of an average is greater than the precision of the individual data points. This can be illustrated by a hypothetical example. Suppose there are 1,000 temperature sensors deployed at monitoring stations, and suppose each sensor is read to a resolution of 1.0 C. It does not matter whether the method is rounding to the nearest whole number or truncating any fractional part. Further suppose that the actual temperature is 22.000 C, and that due to errors in the sensors we get the following readings:

21 C reported by 200 stations
22 C reported by 300 stations
23 C reported by 500 stations
The average of all 1,000 stations would be 22.300 C. Not accurate, but precise. Now suppose the actual temperature increases by 0.01 C. It is likely that we will get the following readings:
21 C reported by 196 stations
22 C reported by 298 stations
23 C reported by 506 stations
which produces an average of 22.31 C. It is still not accurate. But it does precisely indicate the increase of 0.01 C, from 22.30 to 22.31.
So the average can give you more decimal places than the individual readings have. Your grade-school analysis of the problem is insufficient.
In the actual instrumental temperature record, we do not have this ideal case, of course. We have various temperatures and we have stations with various amounts of calibration error. But with intelligent processing and correlation of the raw data, along with other independent indicators of temperature, it is possible to develop a temperature record with the resolution indicated on the graphs Lynn has been showing. You have not discovered something that all the scientists working with this data have overlooked.
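The arithmetic in the hypothetical above is easy to verify, and a short simulation shows the same effect with random sensor errors. This is an illustrative sketch only: the Gaussian error of 0.8 C and the sample count are my assumptions, not numbers from the thread.

```python
import random

# Verify the two worked averages from the 1,000-station hypothetical.
avg_before = (200 * 21 + 300 * 22 + 500 * 23) / 1000  # actual temp 22.000 C
avg_after = (196 * 21 + 298 * 22 + 506 * 23) / 1000   # actual temp 22.010 C
print(avg_before, avg_after)  # 22.3 22.31 -- the 0.01 C shift survives rounding

# Monte Carlo sketch: each reading is the true temperature plus an assumed
# Gaussian sensor error (sigma = 0.8 C), rounded to a whole degree.
random.seed(1)

def mean_of_rounded(true_temp, n=200_000, sigma=0.8):
    return sum(round(true_temp + random.gauss(0, sigma)) for _ in range(n)) / n

m1 = mean_of_rounded(22.000)
m2 = mean_of_rounded(22.010)
# The difference of the means tracks the 0.01 C change to within a few
# thousandths of a degree, even though each reading has 1 C resolution.
print(m2 - m1)
```

Design note: the random errors act as dither, so information about the fractional degree survives rounding; a perfectly noise-free sensor rounded to whole degrees would not show this effect.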
> Thanks for this post, which fully reveals the seriousness with which your analysis should be taken.

Incorrect.
I suggest you review your math.
> Thanks for these great links.

If you want to understand the data adjustments, here are some helpful links:
ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php
data.giss.nasa.gov/gistemp/faq/#q208
> “… experts and their models said Trump had no realistic chance of winning.”

I usually avoid these threads because so many people don’t know of what they speak. Then along comes a cartoonist with a comment I would like to share with the avid posters:
blog.dilbert.com/post/155073242136/the-climate-science-challenge
“there has never been a multi-year, multi-variable, complicated model of any type that predicted anything with useful accuracy. Case in point: The experts and their models said Trump had no realistic chance of winning.”
> OK, I have double checked it. My math in post #308 is correct. Either apologize for calling it incorrect or demonstrate how it is incorrect.

Incorrect.
I suggest you review your math.
> Your claim is that you can average in a higher degree of accuracy than obtained initially.

OK, I have double checked it. My math in post #308 is correct. Either apologize for calling it incorrect or demonstrate how it is incorrect.
> In a way this argument is irrelevant in the long term, since temps are expected to go up by degrees, not just tenths of degrees. If we pass a 2C increase, which could happen before mid-century (they are even expecting a 3C increase by then), CC may take on a life of its own, with positive feedbacks causing methane release from melting hydrates & permafrost. Then it is expected the global average temp by 2100 could go to a 4C to 6C increase – at which point life on earth would grossly suffer & food productivity would go into a nosedive.

Your claim is that you can average in a higher degree of accuracy than obtained initially.
That is incorrect.
But go ahead and double down on it.
> here’s another impact from melting permafrost:

In a way this argument is irrelevant in the long term, since temps are expected to go up by degrees, not just tenths of degrees. If we pass a 2C increase, which could happen before mid-century (they are even expecting a 3C increase by then), CC may take on a life of its own, with positive feedbacks causing methane release from melting hydrates & permafrost. Then it is expected the global average temp by 2100 could go to a 4C to 6C increase – at which point life on earth would grossly suffer & food productivity would go into a nosedive.
So ultimately it is the whole degrees that will matter most.
Since we won’t be around then, I guess it doesn’t matter much. But I can’t stop being concerned about future generations.
> If you don’t believe me, maybe you will believe this authority.

Your claim is that you can average in a higher degree of accuracy than obtained initially.
That is incorrect.
But go ahead and double down on it.
> I think your link supported him

If you don’t believe me, maybe you will believe this authority.
Or this one,
or this one.
Look at the “Accuracy and Precision” section of this reference.
They all agree with me that the mean has more precision.
> Thus changes of a thousandth of a degree are not included, like with the claims of “hottest year ever”.

Mean and standard deviation: round to one more decimal place than your original data. If your original data were mixed, round to one decimal place more than the least precise.
> I think your link supported him
> Thus changes of a thousandth of a degree are not included, like with the claims of “hottest year ever”.

Vz71 claimed the average is no more precise than the components. That is contradicted even by the reference you cited. The rule of thumb you cited is just a rule of thumb; more specific formulas are given by one of my other references. In the case of global averages we have hundreds of stations to average, and each station is sampled hundreds of times, so in a given year we might have a million measurements. That could easily give precision to 0.001 degrees. The scientists doing this know statistics. They would not overlook something that is grade-school math, and this particular limitation vz71 claims is just not true.
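The "million measurements" figure in the post above follows from the standard error of the mean, sigma/sqrt(n): averaging n independent readings shrinks the random error by a factor of sqrt(n). A minimal sketch, assuming an illustrative per-reading uncertainty of 0.5 C (my number, not one from the thread):

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean of n independent readings,
    each with random uncertainty sigma."""
    return sigma / math.sqrt(n)

# ~1,000 stations x ~1,000 readings per year ~= 1,000,000 measurements.
print(standard_error(0.5, 1_000_000))  # 0.0005 C -- thousandths of a degree

# Caveat: only the random component shrinks with averaging; a shared
# calibration bias does not average away, which is why station data is
# cross-correlated and adjusted before averaging.
```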
> Read that reference again. It says “one more digit than the least precise measurement”. Then read the other references I cited. Get the whole picture before trying to support the very extraordinary claim that scientists have been publishing meaningless data.

Actually, you are the one who linked to this rule of thumb, and it specifically says the average is no more precise than your **least precise measurement**. The stations do not measure down to 0.001 degrees of accuracy.
> When I look at the temp anomaly (change in means) charts they are in the tenths, not thousandths.
> I think your link supported him
> Thus changes of a thousandth of a degree are not included, like with the claims of “hottest year ever”.

I have an idea. How about creating a chart that rounds all the temps to whole numbers. Here’s the chart in tenths. In your mind, round them all off to the nearest whole numbers; then we will get a chart in which all the results before 1995 are 0C, all the results after 2000 are 1C, and the intervening years (1995, 1996, 1997, 1998, 1999, 2000) fluctuate between 0C and 1C.

Basic grade school math.
Significant digits refers to the accuracy in a calculation being no greater than the least accurate number involved.
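The whole-number rounding exercise proposed above can be sketched directly. The anomaly series here is made up for illustration; it is not Lynn's chart data.

```python
# Hypothetical yearly anomalies in tenths of a degree (made-up values).
years = list(range(1990, 2006))
anomalies = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.5, 0.6, 0.5,
             0.6, 0.7, 0.7, 0.8, 0.8, 0.9]

# Round every value to a whole degree, as the post suggests.
rounded = [round(a) for a in anomalies]
for year, tenths, whole in zip(years, anomalies, rounded):
    print(year, tenths, whole)

# The smooth 0.8 C rise in tenths collapses into a crude 0-or-1 step:
# the warming is still visible, but almost all of its detail is gone.
```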
> The discussion confuses fundamental concepts.

Actually, you are the one who linked to this rule of thumb, and it specifically says the average is no more precise than your **least precise measurement**. The stations do not measure down to 0.001 degrees of accuracy.