Climate Change: The Facts - Part Two

My last post was about a book I recently read, Climate Change: The Facts. The post discusses the first five chapters of the book, explaining a bit of why they are terrible and should be an embarrassment to anyone involved with the book. One such person, Mark Steyn, has referenced my post a couple times (most recently here) in a derisive way without actually saying I got anything wrong. One of his readers stopped by to fill in that gap, but as hilarious as his comment was, it didn't actually have any substance. (Personally, I'm hoping it was an awesome case of satire.)

This sort of approach to discussions has always bothered me, and it's given me the motivation to discuss more of the book. Also encouraging was my discovery that the book has not yet been officially launched. According to Mark Steyn:

Next week, I'll be out and about promoting the official Earth Day release in North America of Climate Change: The Facts. We've been shipping out personally autographed copies for a couple of weeks as a SteynOnline exclusive, but starting next week you'll be able to get the paperback out in the wider world, too. It's already available in eBook format via Kindle, Nook or Kobo, so, wherever you are on the planet, you can be reading it in the next 90 seconds. But, as I said, the official launch is next week for Earth Day, so I'll be venturing onto the airwaves - as will my co-author Christopher Essex - and we'll keep you apprised of which shows and when.

I find it a little weird this book has been read and discussed for over two months now yet hasn't officially been launched, but at least that means any commentary I have will be contemporary.

Rather than try to create a balanced view by selecting good chapters to contrast to the bad ones I covered in my last post, I'm going to just pick up where I left off - Chapter 6. Chapter 6 is much more narrowly focused than the previous chapters, and to a large extent, that means I can't judge its accuracy. I don't know the details and nuances of climate modeling and weather forecasts in Australia. I'm skeptical of some things said in it, and the references leave something to be desired, but nothing really jumped out at me until I read:

Arguably, the best forecasts of ENSO come not from GCMs or simple statistical models, but from artificial neural networks (ANNs).8 ANNs are massive, parallel-distributed, information processing systems with characteristics resembling the biological neural networks of the human brain. They are a form of artificial intelligence and represent state of the art statistical modeling. In contrast to GCMs that attempt to simulate and understand climate from first principles, ANNs simply mine historical data for patterns.

I've long been interested in neural networks. That's why I knew this description was way off. Neural networks are basically just machines which can "learn." They do not need to be "massive" or "parallel-distributed." All you need is a program which runs calculations, then updates parameters for the calculations by examining the results of the earlier runs. You could manage that with twenty lines of code and run it on a cheap computer from 1990. And the resulting ANN might be suitable for any number of things, not just mining historical data to try to forecast ENSO.
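To show I'm not exaggerating about the "twenty lines of code" claim, here is a toy sketch (entirely my own, nothing to do with the book's ANNs): a single sigmoid neuron trained by gradient descent to learn logical OR, in plain Python with no special hardware:

```python
import math, random

# A minimal "neural network": one sigmoid neuron trained by gradient
# descent to learn logical OR. Nothing massive or parallel about it.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-s))          # sigmoid activation

for _ in range(5000):                      # training loop
    for x, target in data:
        out = predict(x)
        err = out - target
        grad = err * out * (1 - out)       # chain rule through the sigmoid
        for i in range(2):
            w[i] -= 0.5 * grad * x[i]      # update weights
        b -= 0.5 * grad                    # update bias

print([round(predict(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

That's the whole idea: run calculations, compare the results to what you wanted, nudge the parameters, repeat. Everything beyond that is scale and refinement.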

I imagine the authors of the chapter had specific ANNs in mind for this description, and the description may be accurate for those specific ANNs. Even if so, it's embarrassing they'd give such an obviously inaccurate description of ANNs. Another thing they say may not be embarrassing, but it is definitely awkward:

Output from ANNs and GCMs can be easily and objectively measured using root mean square error (RMSE). This number simply adds together the difference between observed and forecast sea surface temperatures or rainfalls with the bigger the number the worse the forecast. So it's easy to show in an objective way that ANNs can provide a much better medium term ENSO and rainfall forecast. The difficulty has been in generating interest in this approach and interest in the potential of ANNs to revolutionize climate science.

Read that paragraph a few times if you didn't catch what the authors did. The authors say ANNs and GCMs can be compared in an objective manner, but they don't actually do so. They make no effort to determine whether ANNs or GCMs are more accurate. Despite this, they say "it's easy to show" ANNs can provide better forecasts than GCMs.

Well, sure. In theory, ANNs could do better than GCMs. In theory, GCMs could do better than ANNs. In theory, blindfolded monkeys banging on keyboards could do better than GCMs or ANNs. That doesn't tell us anything. That we have a way to compare the results of two approaches does not somehow translate into one approach being better. It certainly doesn't indicate one approach could "revolutionize climate science."
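For what it's worth, the comparison really is easy to do, which makes it stranger the authors didn't do it. RMSE is the square root of the mean squared difference between forecasts and observations. Here is a sketch with entirely made-up numbers (hypothetical "model A" and "model B" forecasts, no relation to any real ENSO data):

```python
import math

def rmse(forecast, observed):
    """Root mean square error: sqrt of the mean squared difference."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(observed))

# Made-up sea surface temperature anomalies, for illustration only.
observed   = [0.1, 0.4, 0.8, 1.2, 0.9, 0.5]
forecast_a = [0.0, 0.5, 0.7, 1.1, 1.0, 0.4]   # hypothetical "model A"
forecast_b = [0.3, 0.1, 1.2, 0.8, 0.6, 0.9]   # hypothetical "model B"

# The smaller the RMSE, the better the forecast.
print(rmse(forecast_a, observed))  # → 0.1
print(rmse(forecast_b, observed))  # → ~0.34
```

Anyone with the forecast and observation series in hand could run this comparison. Asserting that one side would win without running it is just hand-waving.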

These problems obviously don't mean ANNs are useless for climate modeling. The authors may even be right in saying ANNs could revolutionize climate science. I'm skeptical though. The efficacy of neural networks is routinely overstated, especially by people who don't understand them well (just look at any prediction of the AI singularity arriving). I suspect the authors of this chapter have fallen into the same trap. This is troubling as the rest of the chapter makes it clear they wrote the chapter to promote their own work on the subject. If you're going to write a chapter in a book claiming to provide "The Facts" to promote work you are doing, you should probably understand the basics of the subject you're discussing. If you get those wrong, why would anyone trust what you have to say on other topics?

I wrote a lot more about this chapter than I expected to. I had planned to cover several chapters in this post. I think it works out though. Neural networks are a subject I know fairly well. I've even considered writing a couple posts about them in the past to explain why most people fearing a robot apocalypse are being silly. That makes it a good topic for me to focus on. The authors of a chapter in this book considered it important, and I am capable of judging what they have to say on it based on personal knowledge and experience. When I apply that knowledge and experience, it sounds like the authors have no idea what they are talking about.

I may not know much about weather forecasting/climate projections in Australia, but if the authors have no idea what they're talking about on one subject they feel is important, I have to be skeptical of what they say on other subjects. I also have to be skeptical of the quality of the book which failed to catch such a poor description in the review process. Even if the authors don't really know what neural networks are, a reviewer could have easily hidden this by making some relatively small changes to the chapter.

And that might be the worst part of this book. Being wrong is one thing. Being wrong in obvious ways is a very different thing. If you're going to publish nonsense and call it "facts," you should at least put some effort into bamboozling me.

Just so you know, I've been sick for the last week or so, and I haven't had the energy to do much online. I've gotten over it, but I still don't have much energy. If you noticed I've been fairly inactive, that's why. Hopefully it won't be a problem from here on.