The Simulation Argument and the Simulation Barrier

source link: https://michaelfeathers.silvrback.com/the-simulation-argument-and-the-simulation-barrier

I can’t recall when I first heard Nick Bostrom’s Simulation Argument, but I know that it was a long time ago. It seems to resurface in the popular consciousness every few years — often when it is tied to the plot of a movie, or when a celebrity or entrepreneur makes reference to it.

The core of the argument can be found in the abstract of Bostrom’s original paper [1]:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

The paper goes on to make the argument, and it's solid enough to have been discussed often over the past 15+ years, but I’ve long had a niggling thought that something is missing.
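The core of Bostrom’s reasoning rests on a simple observer-counting fraction from his paper: the fraction of human-type experiences that are simulated, in terms of the fraction of civilizations that reach a posthuman stage and the average number of ancestor-simulations such a civilization runs. A minimal sketch of that fraction, with illustrative values I have chosen myself (not figures from the paper):

```python
def simulated_fraction(f_p, n_sims):
    """Bostrom's fraction of observers with human-type experiences
    who live in simulations: f_p * N / (f_p * N + 1), where f_p is
    the fraction of civilizations that reach a posthuman stage and
    N is the average number of ancestor-simulations each runs.
    (The average pre-posthuman population cancels out.)"""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Illustrative values: even a tiny f_p pushes the fraction
# toward 1 once the number of simulations is very large.
print(simulated_fraction(0.001, 10**6))  # 1000/1001, about 0.999
```

The point of the trilemma is visible in the edge cases: if almost no civilization becomes posthuman (f_p near 0) or posthumans run almost no simulations (N near 0), the fraction stays near 0; otherwise it goes to nearly 1.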

Let me back into it through another concept that Bostrom introduces: substrate independence.

Bostrom gives a good working definition in this paragraph:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

This is fair enough. It seems reasonable that mental processes, or phenomena that mimic them closely enough to be the same, could run on any sort of hardware — technical or biological. But if we are positing that some things (in this case, mental experience and computation) are the same regardless of substrate, it’s worth asking: do they have to be?

In the next section of the paper, The Technical Limits of Computation, Bostrom appears to imply that they are:

At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. 

Bostrom seems to be assuming that the technical limits of computation he observes would be shared both by the environment that creates a simulation and by the simulation itself. It’s not clear to me that this would be true. In the Western tradition, we have a deep assumption of the universality of mathematical truth. It’s hard, for instance, to imagine self-consistent worlds where various models of deduction we take for granted are broken, but this might be more than a failure of imagination: it could be the lack of a broader context.

The classic example of this is Edwin Abbott Abbott’s Flatland [2]. In this philosophical novel, the characters are confined to a plane and live without any awareness of a third dimension. Their view of many things, including causality, is quite different from ours. In a similar manner, it seems reasonable to allow that simulations, as experienced by their inhabitants, could have entirely different models of math, physics, and even consistency.

This lack of necessary overlap in metaphysics between the world that creates a simulation and the simulation itself could be called the simulation barrier.

I could be missing it, but I haven’t seen any reference to this idea in the discussion around the simulation argument. That might be because it doesn’t go anywhere in particular. On the other hand, it may go too many places.

Let’s assume, for a second, that we are living in a simulation. The reasoning that leads us to consider it (Bostrom’s argument) could be based on a local metaphysics — an artifact of the simulation we are in. One line of thought might be that this does not matter. From our local context, Bostrom’s argument is valuable because it leads us to consider that we might be in a simulation. However, once you concede that we might be, you can’t fully rely on his argument to get there. The logic that it is based on could be a local artifact.

Within Bostrom's line of reasoning, the possibility of a Simulation Barrier seems to put a limit on what any local sense of a world can know about what is outside of it.

[1] - Nick Bostrom, “Are You Living in a Computer Simulation?”, Philosophical Quarterly (2003) Vol. 53, No. 211, pp. 243–255

https://www.simulation-argument.com/simulation.pdf

[2] - Edwin Abbott Abbott, Flatland: A Romance of Many Dimensions

https://gutenberg.org/ebooks/201
