Today I started writing the post I had planned about Matt Ridley's article which I've mentioned wanting to discuss. However, in the process of reading up on it, I came across something I found much more interesting. I was reading an article by Dana Nuccitelli about Ridley's piece when I saw:
Two examples of the sorts of negative changes noted by Professor Myneni recently made news. A new study published in Nature Geoscience by Francisco Estrada, W. J. Botzen, and GWPF advisor Richard Tol found that we can already identify the amplified damages and costs from hurricane landfalls due to human-caused global warming. The authors estimated in the year that Hurricane Katrina struck,
in 2005, US$2 to US$14 billion of the recorded annual losses could be attributable to climate change, 2 to 12% of that year’s normalized losses.
This study contradicts previous papers published by Roger Pielke Jr., which claimed that rising costs of hurricane damages could be accounted for by increased property values along the coast. However, that argument had been criticized by climate scientists like Kevin Trenberth and GWPF contributor Judith Curry, who noted that Pielke’s research did not account for the costs and avoided damages associated with technological improvements like improved building codes and hurricane path forecasting.
This was interesting to me because Richard Tol is something of a darling of the skeptic movement due to his work claiming moderate global warming will be beneficial. Readers will know I am highly critical of this work. Some people have even claimed I am simply biased against Tol due to some sort of personal grudge. That's not true, but I couldn't resist looking at a paper which says we now know global warming has already caused economic damage via hurricanes, co-authored by a person who became popular by saying (moderate) global warming would be beneficial.
Unfortunately, the paper is paywalled, so I haven't been able to read it. Its Supplementary Information is fairly extensive and freely available though, so I have read that. In this post, I'd like to discuss some questions I had while reading it. I won't say the paper is wrong, but I'm definitely not sold on a number of things the authors did. Maybe discussing them will clear things up.
The first thing to realize when trying to determine whether or not global warming is causing economic damage via hurricanes is that we can't just look at how much damage hurricanes have caused over the years. One obvious reason is that a dollar fifty years ago was worth quite a bit more than a dollar today. When you're looking at economic damage, you have to account for inflation.
That's not a big issue, but it does raise one question. Some groups and organizations don't try to calculate inflation rates for the United States prior to 1929 because of the Great Depression. Property values prior to the huge economic crash that led to the Great Depression would have been greatly inflated. Can we really trust economic damage values based on inflated property values? (Even if we could, the economic data this paper relies on to calculate inflation prior to ~1925 was not measured or collected, merely extrapolated. How reliable is that?)
I don't know. It might not matter. I mostly bring up the inflation issue because correcting for inflation requires "normalizing" the data, which is what it's called when you try to put all your data on the same scale. Another reason to normalize when looking at hurricane damage is you have to consider the fact society has changed a great deal over the centuries. For instance, the number of people living in areas hurricanes might strike has greatly increased, as has the value of their property. You have to account for that when looking at this problem.
As mentioned in Nuccitelli's article, previous work has suggested properly normalizing the data shows there has been no trend in damage from hurricanes. This new paper, which Tol co-authored, disagrees. It uses a different, supposedly better, normalization process to account for societal changes and finds there is a climatic signal.
Or at least, that's the idea. I can't vouch that the authors did things correctly, so I can't vouch for that conclusion. What I can say, however, is I find some of what they say highly suspect. For instance, they say the earlier work's normalization process can be described as:
ND = D * RWPC * P
Where ND is the normalized damage, D is the measured damage (adjusted for inflation), RWPC is the national real wealth per capita (the average wealth per person of the nation) and P is the population. You might wonder why RWPC is specified as an average on a national level while P is not. The reason is these values are calculated separately for each county for each hurricane (and for the year of that hurricane).
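To make the arithmetic concrete, here is a minimal sketch of that style of normalization, with entirely invented numbers. The function name and the idea of expressing RWPC and P as growth ratios relative to a base year are my own framing for illustration, not the paper's:

```python
# Hypothetical sketch of the PL05-style normalization (ND = D * RWPC * P).
# All numbers are invented; RWPC and P are expressed here as growth ratios
# between the storm year and the base year.
def normalize_damage(damage, wealth_ratio, pop_ratio):
    """Scale a past storm's inflation-adjusted damage to a base year by the
    growth in real wealth per capita and in affected-county population."""
    return damage * wealth_ratio * pop_ratio

# A storm doing $500M (inflation-adjusted) in 1950, striking counties whose
# real wealth per capita has since tripled and whose population has doubled:
nd = normalize_damage(500.0, 3.0, 2.0)
print(nd)  # 3000.0 -> $3 billion in base-year terms
```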
That's a pretty sensible-seeming formulation. To account for changes in society, you multiply the damage of each hurricane by the number of people it affected and the estimated wealth of those people. Tol and his co-authors say you don't need to do that though. They say:
Table S4 reveals that P is not significantly different from zero at any conventional level while RWPC is only significant at the 10% level. The estimated elasticities indicate that while damages are proportional to RWPC, the damages are far less than proportional to population P and that changes in population have no significant effect on damages.
This is a key difference between the two groups. Tol and his co-authors say the number of people affected by a hurricane has no (significant) effect on its damage. I find that idea rather bizarre myself, but they claim it is justified by both their data and other work:
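For readers unfamiliar with the jargon, an "elasticity" here is the slope from regressing log damages on log population (or log wealth): a slope of 1 means damages scale proportionally, and a slope well below 1 means far less than proportionally. A toy sketch with invented data:

```python
import math

# Toy sketch of estimating an elasticity: regress log(damage) on
# log(population). All numbers are invented for illustration.
def ols_slope(xs, ys):
    """Slope of an ordinary least squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

pops = [1e4, 1e5, 1e6, 1e7]
damages = [p ** 0.5 for p in pops]  # damages grow with sqrt(population)

elasticity = ols_slope([math.log(p) for p in pops],
                       [math.log(d) for d in damages])
print(round(elasticity, 3))  # 0.5: damages far less than proportional
```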
Analyzing global damage data suggests that urbanization and cities have been developed with effectively provided protection against disasters, indicating that significant adaptation efforts have taken place in such areas4,7,12. Previous studies on global and US hurricane damages using regression models to estimate the elasticities of damages to wealth and population variables report coefficients that are also well below unity4,12.
Now, I would have no problem believing the effect of population on damage from hurricanes is non-linear. I can see why a city growing from 10 million to 11 million people wouldn't see the same increase in damages as a city growing from 1 million to 2 million. I can also see how going from 1 million to 2 million would not have the same effect as going from 10,000 to 20,000.
But no discernible effect at all? That seems unbelievable to me. I just can't see it. I can't see how Hurricane Katrina would have been as bad as it was if half as many people had been living in Louisiana. I just can't see how a hurricane hitting an area populated with 1,000 people would have the same damage as a hurricane hitting an area with 10,000 people.
I get that populated areas may do more to plan and prepare for disasters than less populated areas, but this idea just seems wrong to me. Maybe there's something I'm missing though. Tol and his co-authors cite a couple papers I haven't read, so maybe those would clear it up. Still, I just can't see how their results are more believable when they say global warming makes hurricanes more damaging - but it doesn't matter how many people are affected by those hurricanes.
Oh well. Maybe I'm missing something, but there's more to this work than that. That seems to be a central issue, but they also say:
It is also important to note that both NHUR, ACE and NE have a trend that can be represented either by a simple time trend or G, suggesting that these variables themselves could contain a warming signal
G is the global temperature record as given by NASA's GISS. ACE is a widely used climate index known as Accumulated Cyclone Energy. NE is the number of hurricanes listed in a data set for economic damage from hurricanes. That one is a bit weird. It can only count landfalling hurricanes, and it will necessarily undercount hurricanes in earlier periods due to lower populations on the US coasts in the early 1900s.
That point was even made in the paper they cite as the source for the data set, yet Tol and his co-authors still perform a regression on NE over the 1900-2005 period to see if they can find a trend. Why? If they know for a fact a data set is biased low in its early segments, why would they check to see if they can find a rising trend? Of course they will find one. And since temperatures were lower in the earlier parts of the 1900s, of course they'll find a correlation between warming and hurricanes in the NE data set.
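It's easy to demonstrate the circularity with a toy simulation: generate a century of storm counts with a constant true rate, but let the probability of a storm being recorded improve over time. The recorded counts acquire a rising trend that is purely an observation artifact. All the numbers below are invented for illustration:

```python
import random

# Simulate a century with a constant true hurricane rate, but a recording
# probability that improves over time (more coastal observers). The
# recorded counts show an upward trend despite no real change.
random.seed(0)

def ols_slope(xs, ys):
    """Slope of an ordinary least squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

years = list(range(1900, 2006))
recorded = []
for y in years:
    true_count = 6  # constant true rate: no real trend at all
    detect_p = min(1.0, 0.4 + 0.006 * (y - 1900))  # detection improves
    recorded.append(sum(1 for _ in range(true_count)
                        if random.random() < detect_p))

trend = ols_slope(years, recorded)
print(trend > 0)  # a spurious upward trend despite the flat true rate
```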
The same point is actually true for the ACE data set. While Tol and his co-authors explicitly identify the period before 1940 as one where the NE data set is biased low, every other data set will almost certainly be biased low in that period as well. Hurricanes are actually easier to measure at landfall, so ocean-based measurements like ACE will be biased low even later than 1940.
(NHUR will likely be biased low as well, but it isn't actually defined in the paper's SI. It is just said to be "aggregate measurements of the hurricane season activity." I'm guessing that's one of several data sets mentioned earlier in the document, but I don't know which. It's kind of weird. There are a number of minor errors in the document though, so I'm guessing that was just an oversight where the authors failed to spell out the acronym.)
The authors even show their awareness of this issue. A paragraph begins:
Trends in tropical cyclone frequency have been identified in the North Atlantic. Recent analysis indicates that these trends are robust since the 1970s52. However, there is low confidence regarding the robustness of long-term trends in tropical cyclone activity due to doubtful data quality and to the existence of a variety of methods for estimating undercounts of these events in the earlier part of the century53–56
From what I've read, there's actually dispute about that first sentence. Tol and his co-authors cite one source which says trends since the 1970s are robust, but there are other sources which contradict it. What's more interesting, however, is how the paragraph concludes:
In order to test the sensitivity of our results to the dataset used, we present the analysis of NOAA's revised HURDAT dataset in S3.3.1.
HURDAT is a reanalysis project. Like all reanalysis projects, it takes data which has already been measured and, guess what? Re-analyzes it. It can take data from many different sources and combine them to wind up with more information than any one source ever had. What it cannot do is produce information that was never there in the first place.
To put it more simply, HURDAT can't find hurricanes that were never observed in the first place. If a small hurricane hit some uninhabited part of Florida in 1915 and nobody saw it, the people doing the reanalysis for HURDAT won't be able to magically find it.
So why would Tol and his co-authors say this? They showed they were aware the datasets they used were biased to miss more hurricanes the farther back into the past you go, then they provided a test which uses a data set biased to miss more hurricanes the farther back in time you go. I can't understand that.
In fairness, shortly after they say:
Our results provide evidence for a highly significant time trend in the hurricane landfalling data70,71 (see S3.3.2 for a sensitivity analysis of these results).
And Section 3.3.2 does show some alternative model fits for the NE data set from 1940-2005, but that's only for one of the data sets. I guess it's the most important one since it's a landfall data set and landfalling hurricanes are the ones that cause damage, but still, it seems kind of weird.
What's really weird, however, is they only tested the effect the choice of period had on the relationship with the number of hurricanes. They didn't test to see what effect the choice of period had on the actual damages caused by hurricanes - you know, the actual subject of their paper. This point is particularly noteworthy given this figure in the document:
The graph on the left shows damage from hurricanes as normalized in the earlier work (PL05). The graph on the right shows damage from hurricanes as normalized by Tol and his co-authors. The two are dramatically different. The new normalization method results in normalized damages from hurricanes looking quite small, even if you don't account for the fact the scales of the two graphs are different. Here is a (very) crude rescaling of the graphs to put them on a bit more equal footing:
It's obviously not a good image. I'd normally prefer to replot the data, but it doesn't look like the authors have actually posted their data, and I don't care to try to digitize their graphs. Even so, we can already see a clear trend. The farther back in time you go, the greater the difference there is between these two normalization methods.
Given that, it would certainly be interesting to know what happens when you cut off the first ~40% of the data then redo the regressions. Eyeballing it, I'd wager they would still find a correlation between time/warming and hurricane damage, but it'd be dependent entirely upon those three outlying years toward the end of their record.
Which brings me to my point of confusion. Tol and his co-authors say:
According to this estimate the normalized hurricane losses have been increasing by 136 million dollars a year during the past century (i.e., the losses are on average about 14 billion dollars larger in 2005 than if there was no trend). Note that if the destructive hurricane loss year 2005 is excluded, the trend is still significant at the 1% level, but the yearly estimate of losses decreases to about 78 million dollars per year during the 20th century.
The authors openly state their results are roughly cut in half if you remove a single year's data. That seems rather problematic to me. I don't think 50% of your results should depend on less than 1% of the years you look at. Or if they must depend on so little data, I think you should at least talk about that data. Surely the fact that one year is so important deserves some discussion, no?
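A toy example shows how much leverage one extreme year late in a record has on an ordinary least squares trend. The numbers below are invented and have nothing to do with the paper's actual data:

```python
# Demonstrate the leverage of a single extreme year near the end of a
# record on an OLS trend estimate. All values are invented.
def ols_slope(xs, ys):
    """Slope of an ordinary least squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

years = list(range(1900, 2006))
losses = [0.1 * (y - 1900) for y in years]  # modest underlying trend
losses[-1] = 150.0                          # one Katrina-like loss year

slope_all = ols_slope(years, losses)
slope_excl = ols_slope(years[:-1], losses[:-1])
print(slope_all > 1.5 * slope_excl)  # True: one year inflates the trend
```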
And more importantly, why in the world are the authors looking at change in hurricane damage over time? Are they saying global warming has a fixed damage rate per year unrelated to actual changes in temperature? That seems silly. They actually do talk about temperature's effect on hurricane damage, but when they do, they say:
If the linear trend in regression (18) is substituted by G, the corresponding coefficient is significant at the 10% level (t-statistic value of 1.89) and indicates that hurricane losses in the US would increase by about 21 billion dollars per 1ºC increase in global temperatures.
I feel like I must be misunderstanding this. As I read it, the authors are saying there's a correlation between hurricane damage and time that is statistically significant at the 99% level even if you remove the most influential outlier. But even if you include that outlier, the correlation between hurricane damage and temperatures is only significant at the 90% level.
The abstract of their paper says:
We estimate that, in 2005, US$2 to US$14 billion of the recorded annual losses could be attributable to climate change, 2 to 12% of that year’s normalized losses. We suggest that damages from tropical cyclones cannot be dismissed when evaluating the current and future costs of climate change and the expected benefits of mitigation and adaptation strategies.
Which means unless I'm missing something, their headline result is the result based on the relation between time and hurricane damage, not temperatures and hurricane damage!
And their only other headline result is:
Based on records of geophysical data, we identify an upward trend in both the number and intensity of hurricanes in the North Atlantic basin as well as in the number of loss-generating tropical cyclone records in the United States that is consistent with the smoothed global average rise in surface air temperature.
Which appears to be based entirely upon ignoring the fact that the early portions of their hurricane data sets are biased low because of how little of the planet humans were able to observe in the past.
Maybe someone who has read the paper can provide some insight. I'm hoping I've just misunderstood things, because this paper seems too bizarre otherwise.