There is No Intelligent Life Out There

I'm not convinced there is intelligent life here either. I always have to question the idea of human intelligence whenever I hear about the Drake Equation. You've probably heard of this equation before, but if not, Wikipedia describes it:

The Drake equation is a probabilistic argument used to arrive at an estimate of the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy.[1][2] The number of such civilizations, N, is assumed to be equal to the mathematical product of (i) the average rate of star formation, R*, in our galaxy, (ii) the fraction of formed stars, fp, that have planets, (iii) the average number of planets per star, ne, that can potentially support life, (iv) the fraction of those planets, fl, that actually develop life, (v) the fraction of planets bearing life on which intelligent, civilized life, fi, has developed, (vi) the fraction of these civilizations that have developed communications, fc, i.e., technologies that release detectable signs into space, and (vii) the length of time, L, over which such civilizations release detectable signals...
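
For reference, the standard form of the equation, with the factors as defined above, is:

N = R* × f_p × n_e × f_l × f_i × f_c × L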

Now, there is nothing inherently wrong with this. If you want to know how likely something is, you can express that by multiplying out the odds of all necessary factors. That's how probability works.

The problem with the Drake Equation isn't the equation itself, but rather how people misuse the equation to support all sorts of pseudoscientific nonsense. It's particularly bad because scientists are often the ones who do it, and what they do is almost always the same thing - make some key assumption their results will depend upon and gloss over it like it is nothing. Today I found the best example of this I've ever seen.

Before I get to that, I should give credit to the blogger Anders, who wrote a post about this paper (or rather, about a blog post referencing this paper). I wouldn't have seen the paper had I not happened to glance at his blog to see if he was discussing anything interesting. During my quick skim, I saw this:

The article made some interesting points. Current exoplanet statistics suggests that it’s extremely unlikely that we are the only technologically advanced civilisation to have ever developed.

Now, I didn't find the rest of Anders's post interesting. I didn't find the blog post it was about interesting either. This claim is the only thing that caught my eye. It's such an incredible claim I wanted to know what sort of basis there was for it. It turns out the answer is, "None." It all turns on a remarkable example of begging the question.

(As a quick aside, I should point out "begging the question" is not the same as "raising the question." Raising a question would mean bringing it up. Begging a question is a logical fallacy in which one assumes something in order to prove it is true. A reader recently reminded me people often misuse this phrase so I thought I'd make sure we all know what it means.)

The trick to the paper Anders relies upon for his claim comes in an incredibly simple form. It's easy to miss, though, as the authors of the paper modify Drake's Equation in a few largely unimportant ways. First, rather than look at the probability that intelligent life we could communicate with exists right this moment, it looks at the probability that (non-human) technology-inventing intelligent life has existed at all. This simplifies the equation a bit, and the result is represented with the variable A.

The next change the authors make is to simplify the factors by grouping them. Factors which involve planets and the odds of them being habitable are grouped into one variable (N_ast). Factors which involve the origination of life, its evolution into an intelligent form, and the odds of that intelligent life developing technology are grouped into a second variable (f_bt). This gives us the equation:

A = N_ast × f_bt

The authors discuss how new data allows us to get better estimates for N_ast, improving our ability to come up with an estimate for A. This isn't an impressive development, though, as the factors that variable includes have never been great sources of uncertainty for the Drake Equation. How many planets are habitable is far less uncertain than how likely a planet is to develop intelligent life. The authors acknowledge as much, saying f_bt:

is extremely uncertain, basically because (a) we have no theory to guide any estimates, and (b) we have only one known example of the occurrence and history of life, intelligence and technology. We leave f_bt as simply statistically unknown at this time and examine the consequence of it taking on various values depending on one's pessimism or optimism.

That's right. The authors admit to having no idea what one half of their equation is. Despite that, they draw a variety of conclusions, with one author writing in an article:

There is no reason we can't take the same approach with the astrobiology of the Anthropocene. Earlier this year, Woody and I used the amazing exo-planet data (and some very simple reasoning) to set an empirical limit on the probability that we are the only time in cosmic history that an advanced civilization evolved. It turns out the probability is pretty low — one in 10 billion trillion. In other words, one can argue that the odds are very good that we're not the first time this — meaning an energy intensive civilization — has occurred. With that idea in hand, you can take a theoretical jump and ask a simple question: How likely is it that other young civilizations like our own have run into the kind of sustainability crisis we face today?

That's complete nonsense though. The "empirical limit" they come up with is nothing more than, "We assume there's this particular limit, therefore there's this particular limit." We can see this by examining how they came up with their estimates. If they don't know two of the three variables (A and f_bt), how could they come up with any estimate? They explain:

To address our question, A is set to a conservative value ensuring that Earth is the only location in the history of the cosmos where a technological civilization has ever evolved. Adopting A = 0.01 means that in a statistical sense were we to rerun the history of the Universe 100 times, only once would a lone technological species occur. A lower bound f_bt on the probability is then...

That's right. This equation was created to describe the probability that intelligent, technology-creating life would come to exist in our universe. In examining it, the authors set that value to 1%. Having arbitrarily decided what the probability is, they then plug it into their equation and generate some numbers.

But why did they choose 1%? The authors offer no explanation for that value. All they say on the issue is that it is "a conservative value." Says who? Why should anyone think the probability of life like that of humans coming into existence is 1%? Why is 1% any better than 10% or 0.0001%?

It isn't. The authors had no basis for that assumption. They could just as easily have assumed any other value and gotten entirely different results. They could have made the odds that other life like ours has existed at some point in the universe greater than they say simply by changing 1% to 10%. The opposite is true as well. If we assume human life is an incredible fluke which had only a 0.0000000000000000000000000000000000000000000000000000001% chance of happening, then the numbers they came up with would shrink dramatically.
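
To make the arithmetic concrete, here is a minimal sketch in Python of how the implied bound on f_bt moves in lockstep with whatever A one assumes. The 4*10^21 value for N_ast is the estimate a commenter cites below, used purely for illustration; the paper's exact figure may differ:

    # Sketch: the implied bound on f_bt scales one-for-one with the assumed A.
    # N_AST is an illustrative count of habitable-zone planets (the figure a
    # comment below cites), not necessarily the paper's exact number.
    N_AST = 4e21

    for A in (1.0, 0.01, 1e-10, 1e-55):
        f_bt_bound = A / N_AST  # rearranging A = N_ast * f_bt
        print(f"assumed A = {A:.0e} -> implied f_bt = {f_bt_bound:.1e}")

Pick a different A and the "empirical limit" obligingly follows; nothing in the data pins it down.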

It is embarrassing this got published. The entire paper is nothing more than, "If we assume the odds of something happening are X, without any basis or explanation, then the odds of something else happening are Y! Aren't we smart!?" It's the sort of thing a high school student would be slammed for doing in an essay. That scientists would do it is sad, and that they could get it past peer-reviewers is just embarrassing. It's a disgrace to science as a whole.

There are a variety of points that could be made here, particularly with comparisons to climate science, since Anders is mostly known as a climate blogger and he seems to support this sort of work. I'm not interested in thinking through those, though. Instead, I'm just going to close this out with a simple observation.

Nobody knows how big the universe is. We often talk about the "universe" in reference to what we can see, but the reality is the visible universe we're aware of is not the entire universe. We have no idea how far the full universe might extend. The visible universe may comprise a large portion of the full universe, or it might just be an infinitesimal speck in the middle of something larger than we could hope to comprehend.

The authors of this paper list their estimate for the number of stars in the universe, and they note that the size of the region being considered will affect any probability estimates. What they never make clear is that since we have basically no idea how large the universe is, we cannot possibly hope to estimate the odds of other intelligent life existing or having existed in it.

All we can do is say in our (small?) corner of the universe, there are no signs of intelligent life.

6 comments

  1. "A" in this thing isn't the probability that intelligent life comes into existence, rather it's the expected number of occurrences of intelligent life. Given that A is in fact at least one, for any region which includes Earth, it seems reasonable to describe setting it to 0.01 as "conservative".

    They gloss this choice by saying something like "Run the universe from the start & in 1% of the runs you'll get an occurrence of intelligence", which seems to me to confuse the issue. Why not just set it to one? That wouldn't change any of the conclusions as far as I can see. But I'm a stats moron, so I'm probably missing something.

  2. Michael Crichton described the Drake equation as "pure speculation in quasi-scientific trappings."

    It is beyond me how the authors believe they have arrived at "a firm lower bound on the probability that one or more additional technological species have evolved anywhere and at any time in the history of the observable Universe." Their only facts are that there are a lot of "Goldilocks" planets (N_ast) in the Universe -- by their estimates, 4*10^21 -- and at least one technological civilization (ours). That's all. So unless the chance of the development of technology on a random planet is very rare, around 1/N_ast or less, there will likely have developed more than one technological civilization. All well and good, but it's not what their summary claims.
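
    To put a number on that threshold, a rough sketch (using their 4*10^21 estimate for N_ast):

        # Expected number of technological civilizations if each of the N_AST
        # candidate planets independently develops one with probability f.
        N_AST = 4e21

        for f in (1e-18, 1 / N_AST, 1e-25):
            print(f"f = {f:.1e} -> expected civilizations = {N_AST * f:.1e}")

        # The expected count only drops below one when f falls to about
        # 1/N_ast (here 2.5e-22) or less.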

  3. Szilard, you are right that an expected value is not the same as a probability, and I shouldn't have glossed over the distinction. It just doesn't affect the point I was making. The choice of what expected value to use is just as arbitrary. Moreover, when the expected value is small (like in this paper), the expected value and the probability will be basically the same. That's why the authors could describe 0.01 in probabilistic terms without there being any confusion. Interestingly, that does make an unstated assumption - that each occurrence of such life in the universe is independent of the others. In theory, that assumption could be false.
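
    To illustrate that first point, here is a minimal sketch assuming independence, in which case the number of occurrences is Poisson-distributed with mean A:

        import math

        # With independent occurrences, the count is Poisson(A), so
        # P(at least one) = 1 - exp(-A), which is approximately A when A is small.
        for A in (0.01, 1e-6, 1e-12):
            p_at_least_one = -math.expm1(-A)  # numerically stable 1 - exp(-A)
            print(f"A = {A:.0e} -> P(at least one) = {p_at_least_one:.6e}")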

    As for setting A to 0.01, you yourself said A is an expected value. You can't assume just because something happened it must have been the expected outcome. We have no idea how likely it was for us humans (or any similar life) to develop anywhere in the universe. It could have been a one in a million chance as opposed to a one in a hundred. We cannot know what to set A to because we have no idea how much of a fluke it might be that life like ours came to exist. Heck, one could make an argument A should be 0 if they believe intelligent life cannot arise purely by natural means.*

    As for the assumption changing their results, any change in A changes their results proportionally. If you multiply A by 100, you multiply their results by 100. In this formulation, A is divided by N_ast. That means the more unlikely it was for us humans to come into existence as we are, the more unlikely it is for any similar life to exist "out there."

    *We don't even know what consciousness is in any measurable sense or have any meaningful explanation of how it could be tied to any physical source. As such, assuming it could come about purely from natural means is an assumption built purely upon the faith that natural forces can explain everything. A person could choose to reject that assumption, believing in any number of other possibilities. For instance, if a deity imbued life with "souls" and that gave rise to consciousness, A could be 0 absent divine intervention. Alternatively, it could be that humans aren't even intelligent, with our perception of consciousness being an illusion akin to what a computer might "think" while running programs it has no control over. Again, A would be 0.

  4. It used to be that one only encountered such fantasy in the Journal of Irreproducible Results. I characterize this sort of paper as a "Ham Sandwich Speculation" -- along the lines of "If I had some ham, I could make a ham sandwich, if I had some bread." Mark Twain wrote a well-known passage in Life on the Mississippi about scientists getting a whole lot of mileage out of a very small amount of fact, so nothing here is new except that the volume of such drivel is greater.

  5. Brandon: Fair point. This actually touches on something I’ve been interested in for ages. I think there is a very strong cognitive tendency towards a type of invalid inference, and one of the things I’m interested in is the extent to which this tendency “infects” reasoning in other areas.

    Anyway, this discussion re-stoked my interest in the issue & induced me to write the long piece below, which I now inflict on your blog. It repeats in a slightly different way many of your points, and is perhaps trivial and/or wrong. But nevertheless …

    Say event x has property F. p(F) says nothing by itself about p(x), except that p(x) ≤ p(F).

    Say another event y has property G. If p(F) < p(G), this tells you nothing by itself about the relationship between p(x) and p(y).

    But there is often a cognitive tendency to infer p(x) < p(y) where p(F) < p(G).

    Eg: Consider deals in the game of bridge – 13 cards dealt to each of 4 players.

    Consider two stories, each of which contain an exact description of a bridge deal. The stories tell you exactly which cards each player was dealt.

    In the deal in the first story, call it deal x, each player gets cards from each suit; nothing special.

    On the other hand, deal y, the deal from the second story, gives each player all the cards from one suit. So player 1 gets all the spades, player 2 all the hearts etc.

    Of course, x and y and every other bridge deal have exactly the same (minute) probability. But I assert that almost everybody will find story 1 far more plausible than story 2. I personally would feel a very strong tendency towards this assessment, for example.

    I think the implicit (invalid) inference being made is something like this: Deals like x are far more probable than deals like y. Therefore x is far more likely than y.

    Why does this happen? Presumably it’s along these lines: Deal x isn’t “special”; we take it as kind of a proxy for a large class of deals. Replacing x with another deal of the “same kind” wouldn’t change the story in any material way. The story itself is kind of a proxy for a large number of similar stories, and collectively they weigh against the much more “special” story 2. And so on.

    Obviously this all depends on which properties we view as interesting and important. Because it is a card game, we care about suits and card values. Of course, there’s nothing in the cards themselves which defines these properties as interesting or important; it is a construct we impose, not something inherent in the situation.

    And of course, there exist a multitude of other properties which would give different outcomes were we to focus on them. Trivially, there is the property of not being deal x. Looked at in this way, deal x becomes hugely special, while deal y becomes just another member of the overwhelmingly likely class. Story 1 becomes highly implausible; story 2 is a yawn.

    This isn’t a property we would naturally focus on, but that’s just us. The cards themselves are just cards; they don’t inherently select one division into properties over another.

    I think this bridge deal example is interesting because (a) just about everybody will make a plausibility assessment which doesn’t correspond to probability, and (b) there’s no good way as far as I can see to explain this as anything but an *error*.
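
    (For anyone who wants to check the numbers, a quick Python sketch - it just counts deals:)

        from math import factorial

        # Number of distinct bridge deals: 52! / (13!)^4, about 5.36e28.
        deals = factorial(52) // factorial(13) ** 4

        p_specific = 1 / deals                   # identical for deal x and deal y
        p_one_suit_each = factorial(4) / deals   # only 4! = 24 such deals exist

        print(f"P(any one specific deal)     = {p_specific:.3e}")
        print(f"P(each player gets one suit) = {p_one_suit_each:.3e}")

    Every specific deal is equally improbable; only the classes we group them into differ in size.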

    Turning to the intelligent-life thing. I don’t think there is any basis for estimating a value (or a bound) for the expected number of occurrences of intelligence in the universe based on an assumption that Earth intelligence is unique. (But then again I am a stats moron so would not be surprised if I am wrong about this. Anyway …)

    I don’t know how you would go about doing conceptual “runs” to generate possible universes & I have no idea what the distribution of outcomes might look like. But consider some scenarios.

    Let’s say that the expected number of occurrences of intelligence in the universe is one trillion. Let’s say this arises as follows:
    - 1E100 possible universes, all distinct.
    - Each has the same probability – 1E-100.
    - One of them – call it x – has exactly one occurrence of intelligence
    - All the rest have a trillion occurrences

    The probability that our universe is x is vanishingly small, and so the probability that we are the unique occurrence of intelligence is also vanishingly small.

    But of course the probability that our universe is any other specific possible universe y is also vanishingly small, no more likely than that we are x.

    The probability of a single intelligence-occurrence is vastly less than the probability of a trillion occurrences, but it is not the case that p(x) is less than p(y).

    Say we are actually y. By construction, y has some unique property not shared by any other possible universe (all 1E100 universes are distinct). It is vastly unlikely that our universe should have this unique property – but whatever universe we are, it will have some such property. If we are x, the unique property will include being the only universe with one occurrence of intelligence. If we are some other universe y, it will be something else.

    We can choose to consider the intelligence-occurrence property more important and interesting than universe y’s unique property, but that’s just us. There’s nothing inherent in the situation which makes one more important and interesting than the other.

    Now let’s say that the expected number of occurrences of intelligence is very small – 1E-100 – and we get this from 1E100 equally-likely possible universes, as before, but now all except x have zero occurrences.

    The situation is the same: vastly improbable that we are x, but just as vastly improbable that we are any other specified y. Every possible universe necessarily has some unique property & nothing makes this property inherently less important or interesting than occurrence of intelligence.

    I think this is analogous to the bridge hand example. Single-occurrence of intelligence might be hugely improbable, but that wouldn’t necessarily mean that a specific universe exhibiting it is less probable than a specific universe which doesn’t; and every universe will have some property which is similarly improbable.

    So the fact that there is at least one occurrence of intelligence doesn’t seem to give us any useful information about the expected number of occurrences – it could be huge or it could be almost zero.

    Of course, the distribution of outcomes from possible-universe “runs” might not look like these scenarios. For example, perhaps there are only two possible universes, call them x and y, with different probabilities.

    Say that x has a probability of 1E-100 and has a single occurrence of intelligence. Say y has a trillion occurrences, and probability 1 - 1E-100. As before, the expected number of occurrences is one trillion (less a tiny fraction) and it is hugely improbable that we are the single-occurrence universe x. But in this scenario it’s almost certain that we are in fact y.
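
    Here’s the contrast between the earlier many-universe scenario and this two-universe one, as a toy calculation (numbers exactly as stated above):

        P_X = 1e-100  # P(we are x) in both scenarios

        # Scenario 1: 1e100 equally likely universes; x has one occurrence of
        # intelligence, every other universe has a trillion.
        expected_1 = 1 * P_X + 1e12 * (1 - P_X)  # ~1e12
        p_any_specific_y_1 = 1e-100              # any given y is as unlikely as x

        # Scenario 2: just two universes; y soaks up all remaining probability.
        expected_2 = 1 * P_X + 1e12 * (1 - P_X)  # same expectation, ~1e12
        p_y_2 = 1 - P_X                          # now we are almost certainly y

        print(expected_1, p_any_specific_y_1, expected_2, p_y_2)

    Same expected number, same minute chance that we are x - but only in the second scenario does any single alternative universe become a near-certainty.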

    I think the error (to the extent there is one) is in conflating these scenarios: treating the y from the first scenario as a kind of proxy for all the other y’s which have the same property we are interested in, and turning the scenario into something like the second, where all of these y’s combine into a single exemplar.

    Anyway, I think this stuff is interesting.

  6. Szilard, that's quite a bit of text. I'm not bothered by it, but it may take me a bit to digest it. My first impression is I largely agree, though there is at least one caveat. In your example of bridge hands, it may often be reasonable to assume something is abnormal about "unusual" hands because of why the hand is being discussed.

    That caveat aside, on the central issue:

    So the fact that there is at least one occurrence of intelligence doesn’t seem to give us any useful information about the expected number of occurrences – it could be huge or it could be almost zero.

    We are in complete agreement. Especially since we have no idea how sentience might have come to exist. Philosophically speaking, there's no particular reason to think sentience is a natural occurrence. I get that many people won't care about that and will just assume sentience came about naturally, but that's an assumption you'd need to state up front when attempting to do any probability calculations.
