Arctic ice melt could trigger uncontrollable climate change at global level

  • Thread starter: lynnvinc
Because people like yourself exaggerate to a ridiculous degree.

When I, with no statistical training or advanced math degree, can look at charts and see the flaw, there is a problem.
When I, with little science training can see that the claims made exceed the science available, there is a problem.
When I am accused of murder, there is a problem.

One lie too many.
And now my first response is a strong disagreement.
That’s your take on the situation.

I prefer to hope for the best & expect the worst, & follow prudence – just in case I may be harming others. I take the words of the real climate scientists. So I do what I can.

You can just ignore the issue & all that I’ve said – that’s your prerogative.

I’ve done my duty by telling what I know, to the extent of my knowledge.
 
There are two problems with your analysis.
  1. Historical temperature measurements have more resolution than you imagine. The technology has gradually improved over the years, but a very accurate sensor is based on an electrical RTD device, which today has accuracy of about 0.03 C and resolution limited only by the resolution of the measurement of resistance, which can be quite precise using a bridge circuit. Remember that for the purpose of developing temperature-change data over time, resolution is more important than accuracy; what matters is that changes over time can be measured consistently. It is worth noting that the RTD effect was discovered in 1821, and resistive bridge circuits go back even further than that. (A short resolution sketch appears at the end of this post.)
  2. For the purposes of measuring changes in temperature, the precision of an average is greater than the precision of the individual data points. This can be illustrated by a hypothetical example. Suppose there are 1,000 temperature sensors deployed at monitoring stations, and suppose each sensor is read to a resolution of 1.0 C. It does not matter if the method is rounding to the nearest whole number or truncating any fractional part. Further suppose that the actual temperature is 22.000 C. And suppose that due to errors in the sensors, we get the following readings:
21 C reported by 200 stations
22 C reported by 300 stations
23 C reported by 500 stations

The average of all 1,000 stations would be 22.300 C. Not accurate, but precise. Now suppose the actual temperature increases by 0.01 C. It is likely that we will get the following readings:

21 C reported by 196 stations
22 C reported by 298 stations
23 C reported by 506 stations

which produces an average of 22.31 C. It is still not accurate, but it does precisely indicate the increase of 0.01 C, from 22.30 to 22.31.

So the average can give you more decimal places than the individual readings have. Your grade-school analysis of the problem is insufficient.
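If you want to check this yourself, here is a minimal simulation sketch of the example above (my own toy code with an invented error spread, not actual station data):

```python
import random

# Minimal sketch of the hypothetical above (invented numbers, not real
# station data): N sensors that round the true temperature to the nearest
# whole degree, each with its own fixed calibration offset.
N = 100_000
random.seed(1)
offsets = [random.gauss(0, 0.5) for _ in range(N)]  # assumed error spread

def mean_reading(true_temp):
    # Each sensor reports round(true temperature + its calibration error).
    return sum(round(true_temp + off) for off in offsets) / N

before = mean_reading(22.00)
after = mean_reading(22.01)
print(f"{before:.4f} -> {after:.4f}, difference {after - before:.4f}")
# The difference comes out near 0.01 C even though no individual sensor
# resolves anything finer than 1 C.
```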

In the actual instrumental temperature record, we do not have this ideal case, of course. We have varying temperatures and stations with varying amounts of calibration error. But with intelligent processing and correlation of the raw data, along with other independent indicators of temperature, it is possible to develop a temperature record with the resolution indicated on the graphs Lynn has been showing. You have not discovered something that all the scientists working with this data have overlooked.
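And here is the resolution sketch promised under point 1 above: a toy calculation assuming a standard Pt100 RTD and the common linear approximation, not a description of any particular station's instrument.

```python
# Minimal sketch, assuming a standard Pt100 RTD (R0 = 100 ohm at 0 C,
# alpha = 0.00385 ohm/ohm/C per IEC 60751) and the linear approximation
# R(T) = R0 * (1 + alpha * T). Real work uses the full Callendar-Van Dusen
# equation; this only shows how temperature resolution tracks resistance
# resolution.
R0 = 100.0       # ohms at 0 C
ALPHA = 0.00385  # ohm/ohm/C

def rtd_temperature(resistance_ohms: float) -> float:
    """Invert R(T) = R0 * (1 + ALPHA * T) for temperature in C."""
    return (resistance_ohms / R0 - 1.0) / ALPHA

# A bridge that resolves 0.001 ohm resolves roughly 0.0026 C:
print(rtd_temperature(100.385) - rtd_temperature(100.384))
```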
Yes, that’s the issue – the differences of means, not the means themselves. As long as the data & means are “reliable,” they don’t have to be as “valid.”

An example is a watch one forgets to set back one hour in the fall – if you are timing the baking of a cake, it won’t make any difference that the watch’s time is not valid, only that it is reliable.

This also speaks to the issue of adjusted data. In that case they’ve developed better methods for making the data more valid (reflecting reality) and have introduced them. However, the data of the past didn’t get that treatment, so to make previous data comparable to data recorded after the method was introduced – to make the differences reliable – they have to apply that same method to the previous data, sometimes making that previous data look higher than it was, sometimes lower. A small sketch of the idea follows.
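A minimal sketch of that comparability point, assuming the simplest possible adjustment (a constant offset; the series and the offset are invented, and real homogenization is far more involved):

```python
import math

# Invented series and offset, for illustration only.
old_method = [14.20, 14.30, 14.50, 14.40]  # degrees C under the old method
adjustment = -0.15                         # assumed correction from the new method

adjusted = [t + adjustment for t in old_method]

# The year-to-year differences (what anomaly charts plot) are unchanged:
diffs_before = [b - a for a, b in zip(old_method, old_method[1:])]
diffs_after = [b - a for a, b in zip(adjusted, adjusted[1:])]
assert all(math.isclose(x, y) for x, y in zip(diffs_before, diffs_after))
print(diffs_before)  # [0.1, 0.2, -0.1] (up to floating-point noise)
```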
 
Could people please not post giant graphics? It makes the thread very difficult to read.

Thanks. 🙂
 
I usually avoid these threads because so many people don’t know of what they speak. Then along comes a cartoonist with a comment I would like to share with the avid posters.

blog.dilbert.com/post/155073242136/the-climate-science-challenge

“there has never been a multi-year, multi-variable, complicated model of any type that predicted anything with useful accuracy. Case in point: The experts and their models said Trump had no realistic chance of winning.”
“… experts and their models said Trump had no realistic chance of winning.”
That statement, of course, is demonstrably false…
 
OK, I have double checked it. My math in post #308 is correct. Either apologize for calling it incorrect or demonstrate how it is incorrect.
Your claim is that you can average your way to a higher degree of accuracy than was obtained initially.

That is incorrect.

But go ahead and double down on it.
 
In a way this argument is irrelevant in the long term, since temps are expected to go up by degrees, not just tenths of degrees. If we pass a 2C increase, which could happen before mid-century (they are even expecting a 3C increase by then), CC may take on a life of its own, with positive feedbacks causing methane release from melting hydrates & permafrost. Then it is expected the global average temp by 2100 could go to a 4C to 6C increase – at which point life on earth would grossly suffer & food productivity would go into a nosedive.

So ultimately it is the whole degrees that will matter most.

Since we won’t be around then, I guess it doesn’t matter much. But I can’t stop being concerned about future generations.
 
Here’s another impact from melting permafrost:

Russian Cities Might Collapse By 2050 Due To Climate Change – Here’s How

“Climate change is an undeniable reality. Now, even mighty cities of great empires are in danger of crumbling. The latest victims of thawing permafrost are Vladimir Putin’s eastern cities. The ground used to feel solid, and it was, until the permafrost starting to melt…” at morningledger.com/russian-cities-might-collapse-2050-due-climate-change-heres/13135138/
 
If you don’t believe me, maybe you will believe this authority.
Or this one,
or this one.
Look at the “Accuracy and Precision” section of this reference.

They all agree with me that the mean has more precision.
I think your link supported him
mean and standard deviation: round to one more decimal place than your original data. If your original data were mixed, round to one decimal place more than the least precise.
Thus changes of a thousandth of a degree are not included, like with the claims of “hottest year ever”.
 
Vz71 claimed the average is no more precise than the components. That is contradicted even by the reference you cited. The rule of thumb you cited is just that – a rule of thumb. More specific formulas are given in one of my other references. In the case of global averages we have hundreds of stations to average, and each station is sampled hundreds of times. So in a given year we might have a million measurements. That could easily give precision to 0.001 degrees. The scientists doing this know statistics. They would not overlook something that is grade-school math. The particular limitation vz71 claims is just not true.
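For reference, the textbook result behind that claim (standard statistics, not something quoted in this thread) is the standard error of the mean. Taking an assumed per-reading precision of $\sigma \approx 0.5$ C and the million-measurements-per-year figure above:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}} \approx \frac{0.5\ ^{\circ}\mathrm{C}}{\sqrt{10^{6}}} = 0.0005\ ^{\circ}\mathrm{C}.$$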
 
Actually, you are the one who linked to this rule of thumb, and it specifically says the average is no more precise than your **least precise measurement**. The stations do not measure down to 0.001 degrees of accuracy.
 
Read that reference again. It says “one more digit than the least precise measurement”. Then read the other references I cited, and get the whole picture before trying to support the very extraordinary claim that scientists have been publishing meaningless data.
 
I think your link supported him

Thus changes of a thousandth of a degree are not included, like with the claims of “hottest year ever”.
When I look at the temp anomaly (change in means) charts, they are in the tenths, not thousandths.

And they show that the warming to date above pre-industrial is 0.8C, so it is pushing a 1C rise, perhaps within a few years or so.

So does that mean you all here at CAF will then accept that there is global warming when it reaches 1C increase, which is a whole number?
 
Basic grade school math.
Significant digits refers to the accuracy in a calculation being no greater than the least accurate number involved.
I have an idea. How about creating a chart that rounds all the temps to whole numbers? Here’s the chart in tenths. In your mind, round them all off to the nearest whole numbers; then we will get a chart in which all the results before 1995 are 0C and all the results after 2000 are 1C, with the intervening years (1995, 1996, 1997, 1998, 1999, 2000) fluctuating between 0C and 1C.
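To make that concrete, a toy version (the anomaly values here are invented for illustration, not read off the actual chart):

```python
# Invented anomaly values in tenths of a degree, for illustration only.
anomalies = {1990: 0.31, 1994: 0.38, 1997: 0.52, 2000: 0.48, 2005: 0.68, 2010: 0.72}
rounded = {year: round(value) for year, value in anomalies.items()}
print(rounded)
# {1990: 0, 1994: 0, 1997: 1, 2000: 0, 2005: 1, 2010: 1}
# Whole-degree rounding flattens two decades of warming into 0s and 1s.
```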

 
Actually, you are the one who linked to this rule of thumb, and it specifically says the average is no more precise than your **least precise measurement**. The stations do not measure down to 0.001 degrees of accuracy.
The discussion confuses fundamental concepts.

First, we should not be discussing accuracy, but precision. Second, in considering the non-systematic errors in measurement that determine precision, there are two situations that are being conflated:
  1. Suppose that you are attempting to determine the value of a quantity, z, from independent measurements of the independent variables, say, w, x, y, on which z depends. The precision with which one can determine z from measurements of w, x, y is limited by the precision in the measurements of w, x, y. The squared uncertainty in z incorporates the squared uncertainty of each independent variable multiplied by the square of the partial derivative of z with respect to that independent variable (see the formulas after this list). In the simplest case, but not in all cases, this relationship means that the precision in z is limited by the precision of the least precisely measured independent variable. In lower-division student labs, taken before propagation of errors is studied in detail, the simplest case leads to the simple rule of thumb given in the Penn State link: “report your value with the same number of significant figures as the value with the smallest number of significant figures”. Note that in the later discussion of propagation of uncertainties, this rule of thumb does not immediately apply.
  2. Suppose you are making direct measurements of some particular quantity. The best estimate of the true value of the measured quantity is given by the mean of the measurements, and the precision of the determination of the mean is given by the precision of the individual measurements divided by the square root of the number of measurements. This result, by the way, follows directly from case 1. The precision can be, in principle, arbitrarily better than the precision of the individual measurements. If the uncertainties in the individual measurements being averaged are not all the same, the result is not as simple, but the principle that a large number of measurements can give rise to precision greater than the least precise individual measurement still holds.
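Spelled out, the two cases above are the standard propagation-of-uncertainty formulas (supplied here for reference rather than quoted from any post in the thread):

$$\sigma_z^{2}=\left(\frac{\partial z}{\partial w}\right)^{2}\sigma_w^{2}+\left(\frac{\partial z}{\partial x}\right)^{2}\sigma_x^{2}+\left(\frac{\partial z}{\partial y}\right)^{2}\sigma_y^{2}\qquad\text{(case 1)},$$

$$\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{N}}\qquad\text{(case 2, for $N$ equal-precision measurements)}.$$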
 