
Schrödinger’s Gray Goo

source link: https://bpodgursky.com/2020/04/11/schrodingers-gray-goo/

Scott Alexander’s review of The Precipice prompted me to commit to keyboard an idea I play with in my head: (1) the biggest risks to humanity are the ones we can’t observe, because they are too catastrophic to survive, and (2) we do ourselves a disservice by focusing on preventing only the catastrophes we have already observed.

Disclaimer: I Am Not A Physicist, and I’m especially not your physicist.

1. Missing bullet holes

The classic parable of survivorship bias comes from the Royal Air Force during WWII. The story has been recounted many times:

Back during World War II, the RAF lost a lot of planes to German anti-aircraft fire. So they decided to armor them up. But where to put the armor? The obvious answer was to look at planes that returned from missions, count up all the bullet holes in various places, and then put extra armor in the areas that attracted the most fire.

Obvious but wrong. As Hungarian-born mathematician Abraham Wald explained at the time, if a plane makes it back safely even though it has, say, a bunch of bullet holes in its wings, it means that bullet holes in the wings aren’t very dangerous. What you really want to do is armor up the areas that, on average, don’t have any bullet holes.

Why? Because planes with bullet holes in those places never made it back. That’s why you don’t see any bullet holes there on the ones that do return.

[Figure: the familiar diagram of bullet-hole distribution on returning aircraft, with holes concentrated on the wings and fuselage]

The wings and fuselage look like high-risk areas, on account of being full of bullet holes.  They are not. The engines and cockpit only appear unscathed because they are the weakest link.  
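
A toy simulation makes the bias concrete. The sketch below is my own illustration, not Wald’s actual analysis; the regions, hit counts, and lethality numbers are invented. Planes take hits uniformly at random, hits to the engine or cockpit are far more likely to down the plane, and we then tally bullet holes only on the planes that return.

```python
import random

# Toy model of survivorship bias. The regions, hit counts, and lethality
# numbers are invented for illustration; this is not Wald's actual analysis.
REGIONS = ["wings", "fuselage", "engine", "cockpit"]
LETHALITY = {"wings": 0.05, "fuselage": 0.10, "engine": 0.60, "cockpit": 0.60}

def fly_mission(hits_per_plane=8):
    """One plane takes hits uniformly at random; returns (survived, hole counts)."""
    holes = {r: 0 for r in REGIONS}
    survived = True
    for _ in range(hits_per_plane):
        region = random.choice(REGIONS)          # fire lands uniformly
        holes[region] += 1
        if random.random() < LETHALITY[region]:  # a hit here may down the plane
            survived = False
    return survived, holes

observed = {r: 0 for r in REGIONS}   # holes visible on planes that returned
actual = {r: 0 for r in REGIONS}     # holes across every plane, returned or not
for _ in range(100_000):
    survived, holes = fly_mission()
    for r in REGIONS:
        actual[r] += holes[r]
        if survived:
            observed[r] += holes[r]

for r in REGIONS:
    print(f"{r:8s}  all planes: {actual[r]:7d}   returning planes: {observed[r]:7d}")
# Hits are spread evenly, but engine and cockpit holes are scarce among the
# returners -- the planes that took them rarely came home to be counted.
```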

2. Quantum interpretations

The thought-experiment of Schrödinger’s cat explores possible interpretations of quantum theory:

The cat is penned up in a steel chamber, along with the following device: In a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. 

Quantum theory posits that we cannot predict an individual atomic decay; the decay is an unknowable quantum event until it is observed. The Copenhagen interpretation of quantum physics holds that the cat’s state collapses only when the chamber is opened; until then, the cat remains both alive and dead.

The many-worlds interpretation declares instead that the universe bifurcates into universes where the atom did not decay (and thus the cat survives) and those where it did (and thus the cat is dead).

The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wavefunction collapse. This implies that all possible outcomes of quantum measurements are physically realized in some “world” or universe.

The many-worlds interpretation implies that there is a very large—perhaps infinite—number of universes. It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realised. 


3. The view from inside the box

The quantum suicide thought-experiment imagines Schrödinger’s experiment from the point of view of the cat.  

By the many-worlds interpretation, in one universe (well, several universes) the cat survives. In the others, it dies. But a cat never observes universes in which it dies. Any cat that walked out of the box, were it a cat prone to self-reflection, would comment upon its profound luck.

No matter how likely the atom was to decay, even if the odds were rigged 100 to 1, the observed outcome is the same: the cat walks out of the box, grateful for its good fortune.
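
The selection effect is easy to demonstrate with a toy Monte Carlo that conditions on survival. This is only a sketch of the idea, using the 100-to-1 odds from the paragraph above:

```python
import random

DECAY_PROBABILITY = 100 / 101    # odds rigged 100 to 1 in favor of decay
TRIALS = 1_000_000

survivor_observations = []       # what the cat records in branches where it lives
for _ in range(TRIALS):
    decayed = random.random() < DECAY_PROBABILITY
    if decayed:
        continue                 # the cat is dead in this branch; nothing is observed
    survivor_observations.append(decayed)

print(f"branches simulated:          {TRIALS}")
print(f"branches with a living cat:  {len(survivor_observations)}")
print(f"living cats that saw decay:  {sum(survivor_observations)}")   # always 0
# From the outside, decay is overwhelmingly likely (~99%). From inside the box,
# every observation ever recorded says "the atom did not decay."
```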

4. Our box

Perhaps most dangerously, the cat may conclude that since the atom went so long without decaying, even though all the experts predicted decay, the experts must have been using poor models that overestimated the inherent existential risk.

Humans do not internalize observability bias. It is not a natural concept. We only observe the worlds in which we, as humans, still exist to do the observing. Definitionally, no “humanity-ending threat” has ended humanity.

My question is: How many extinction-level threats have we avoided not through calculated restraint and precautions (lowering the odds of disaster), but through observability bias?

The space of futures where nanobots are invented is (likely) highly bimodal; if self-replicating nanobots are possible at all, they will (likely) prove a revolutionary leap over biological life.  Thus the “gray goo” existential threat posited by some futurists:

Gray goo (also spelled grey goo) is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass on Earth while building more of themselves.

If self-replicating nanobots strictly dominate biological life, we won’t spend long experiencing a gray goo apocalypse.  The reduction of earth into soup would take days, not centuries:

Imagine such a replicator floating in a bottle of chemicals, making copies of itself…the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth
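
Drexler’s arithmetic is easy to check, and the endpoint is not sensitive to the details. The sketch below assumes a replicator mass of about a picogram, roughly a bacterium (my assumption; the quoted passage gives no figure), and simply doubles the population every thousand seconds:

```python
import math

DOUBLING_SECONDS = 1_000      # each replicator copies itself every 1,000 seconds
REPLICATOR_KG = 1e-15         # assumed mass: ~1 picogram, roughly a bacterium
EARTH_KG = 5.97e24

def population(seconds: int) -> int:
    """Replicator count after `seconds`, starting from a single machine."""
    return 2 ** (seconds // DOUBLING_SECONDS)

print(f"after 10 hours: {population(10 * 3600):,} replicators")   # 68,719,476,736

def hours_to_reach(mass_kg: float) -> float:
    """Hours until the swarm's total mass reaches `mass_kg`."""
    doublings = math.log2(mass_kg / REPLICATOR_KG)
    return doublings * DOUBLING_SECONDS / 3600

print(f"one ton in      ~{hours_to_reach(1_000):.0f} hours")      # well under a day
print(f"Earth's mass in ~{hours_to_reach(EARTH_KG):.0f} hours")   # under two days
```

A heavier replicator barely changes the picture: every factor of 1,000 in unit mass adds about ten doublings, or roughly three more hours.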

Imagine a world in which an antiBill Gates stands with a vial of gray goo in one hand, and in the other a Geiger counter pointed at a single oxygen-14 atom: “Schrödinger’s gray goo”. Our antiBill commits to releasing the gray goo the second the oxygen-14 atom decays and triggers the Geiger counter.

In the Copenhagen interpretation, there’s a resolution.  The earth continues to exist for a minute (oxygen-14 has a half-life of 1.1 minutes), perhaps ten minutes, but sooner or later the atom decays, and the earth is transformed into molecular soup, a giant paperclip, or something far stupider.  This is observed from afar by the one true universe, or perhaps by nobody at all.   No human exists to observe what comes next. [curtains]

In the many-worlds interpretation, no human timeline survives in which the oxygen-14 atom decays. antiBill stands eternal vigil over that oxygen-14 atom: the only atom in the universe for which the standard law of half-life decay does not apply.
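
For a sense of how unlikely that vigil is from the outside: a single atom’s survival probability halves with every half-life. A minimal calculation, using the post’s 1.1-minute (66-second) figure for oxygen-14:

```python
HALF_LIFE_SECONDS = 66        # the post's figure: oxygen-14 half-life of ~1.1 minutes

def survival_probability(seconds: float) -> float:
    """Probability that a single oxygen-14 atom has not yet decayed."""
    return 0.5 ** (seconds / HALF_LIFE_SECONDS)

for label, t in [("1 minute", 60), ("10 minutes", 600), ("1 hour", 3_600)]:
    print(f"{label:>10}: {survival_probability(t):.2e}")
# 1 minute:   ~5.3e-01  (roughly a coin flip)
# 10 minutes: ~1.8e-03  (already a long shot)
# 1 hour:     ~3.8e-17  (yet every surviving timeline sees antiBill still
#                        standing guard over an intact atom)
```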

5. Our world

As a species, we focus on preventing and averting (to the extent that we avert anything) the risks we are familiar with:

  • Pandemics
  • War (traditional, bloody)
  • Recessions and depressions
  • Natural disasters — volcanoes, earthquakes, hurricanes 

These are all bad.  As a civilization, we occasionally invest money and time to mitigate the next natural disaster, pandemic, or recession.

But we can agree that while some of these are civilizational risks, none of them are truly species-level risks.  Yet we ignore AI and nanotechnology risks, and to a lesser but real degree, we ignore the threat of nuclear war.  Why though?

  • Nuclear war seems pretty risky
  • Rogue AI seems potentially pretty bad
  • Nanobots and grey goo (to the people who think about this kind of thing) seem awful

The reasoning (to the extent that reasoning is ever given) is: “Well, those seem plausible, but we haven’t seen any real danger yet. Nobody has died, and we’ve never even had a serious incident.”

We do see bullet holes labeled “pandemic”, “earthquake”, “war”, and we reasonably conclude that if we got hit once, we could get hit again. Even if individual bullet holes in the “recession” wing are survivable, the cost in human suffering is immense, and worth fixing. Enough recession/bullets may even take down our civilization/plane.

But maybe we are missing the big risks because they are too big. Perhaps there exist vanishingly few timelines with a “minor gray goo incident” which atomizes a million unlucky people. Perhaps there are no “minor nuclear wars”, “annoying nanobots”, or “benevolent general AIs”. Once those problems manifest, we cease to be observers.

Maybe these are our missing bullet holes.


6. So, what?

If this theory makes any sense whatsoever (which is not a given), the obvious follow-up is that we should make a serious effort to evaluate the probability of Risky Things happening, without requiring priors from historical outcomes. For example:

  • Calculate the actual odds — given what we know of the fundamentals — that we will in the near-term stumble upon self-replicating nanotechnology
  • Calculate the actual odds — given the state of research — that we will produce a general AI in the near future
  • Calculate the actual odds that a drunk Russian submariner will trip on the wrong cable, vaporize Miami, and start WWLast

To keep things moving, we can nominate Nassim Nicholas Taleb to be the Secretary of Predicting and Preventing Scary Things. I also don’t mean to exclude other extinction-level scenarios; I just don’t know any others off the top of my head. I’m sure other smart people do.

If the calculated odds seem pretty bad, we shouldn’t second guess ourselves — they probably are bad.  These calculations can help us guide, monitor, or halt the development of technologies like nanotech and general AI, not in retrospect, but before they come to fruition.

Maybe the Copenhagen interpretation is correct, and the present/future isn’t particularly dangerous.  Or maybe we’ve just gotten really lucky.  While I’d love for either of these to put this line of thought to bed, I’m not personally enthused about betting the future on it.

