The Simulation Argument: Doctor Manhattan's reply

If bacteria could conceive of the Simulation Argument, they’d probably think, “Intelligent beings will simulate far more of us than could ever exist in reality! We must be simulations!”

In reality, humans prefer to use their computers to watch cat videos and chat with other humans, not to simulate bacteria.

I posit that this stays true at every conceivable margin of intelligence. Future superintelligences will be chiefly enraptured by media about reality and conversations with other superintelligences. You are a pitiful human-level intelligence, not worth wasting computation on, and therefore not a simulation. Your superintelligent distant progeny will be too busy gawking at each other on Posthuman TikTok to pay you any mind, much less resurrect you, whether for their own amusement or to punish you.

Some people hypothesize that future beings will nevertheless want to simulate especially interesting humans. However, paraphrasing Doctor Manhattan, I posit that the world’s most interesting human means no more to a god than does its most interesting termite.

Stated generally: once you have the resources to simulate many beings of a given type with high fidelity, doing so becomes uninteresting, at least compared to other uses for those resources. Robin Hanson has put numbers on this argument, although I think he rhetorically overstates the extent to which humans will even be viewed as meaningful “ancestors” by posthumans.

Nick Bostrom’s original paper on the Simulation Argument also mentions the “simulations become uninteresting” scenario (labeled “Proposition (2)”), but implies that it requires a weird change in the motivations of posthuman societies:

> Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations. This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so. But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human-like desires and are free to act on them.

However, I claim we can extrapolate straightforwardly from our current society and its ordinary motivations. How many high-fidelity simulations do humans run of our evolutionary relatives?

To be sure, we run a lot of low-fidelity simulations of many kinds of living things, but those simulations are conceived for our needs, and we invariably abstract away the parts that aren’t relevant to us.

In the scientific realm, it is now (barely) technically feasible to run a molecular-level, high-fidelity simulation of some components of a bacterium, but when you want to understand bacterial behavior as a whole it’s far more efficient to use a more abstracted model. Efficient simulations of populations of bacteria use still more abstracted models. In general, an efficient simulation of an aggregate abstracts away the low-level details that would matter when simulating an individual in depth.
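To make the tradeoff concrete, here is a toy sketch (in Python, with made-up parameters) contrasting a per-cell model of bacterial growth with the aggregate logistic model one would actually reach for. Both produce similar population curves, but the aggregate model discards every individual-level detail and costs a constant amount per step instead of scaling with the population.

```python
import random

# Per-individual model: track every cell, step by step. Even this
# cartoon costs O(population) per step; a molecular-level model of
# each cell would be many orders of magnitude costlier still.
def simulate_individuals(n0, divide_prob, capacity, steps):
    cells = n0
    for _ in range(steps):
        births = sum(
            1 for _ in range(cells)
            if random.random() < divide_prob * (1 - cells / capacity)
        )
        cells += births
    return cells

# Aggregate model: one logistic update per step, O(1), abstracting
# away every individual cell.
def simulate_logistic(n0, rate, capacity, steps):
    n = n0
    for _ in range(steps):
        n += rate * n * (1 - n / capacity)
    return n

# Similar answers, wildly different cost.
print(simulate_individuals(100, 0.1, 10_000, 50))
print(round(simulate_logistic(100, 0.1, 10_000, 50)))
```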

In the entertainment realm, we could build computer games that accurately simulate animal brains at the neural level, but we seem to be much more efficiently entertained by simulations that abstract that machinery away. In practice, we prefer to hand-script simplistic behaviors and spend our computational power on more entertaining aspects of animal behavior, like realistic fur physics.
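To illustrate, the “simplistic behaviors” in question are often nothing more than a small hand-written state machine. The following hypothetical sketch (the states and thresholds are invented for illustration) is about the level of behavioral fidelity a typical game animal gets; nothing resembling a nervous system is simulated at all.

```python
from enum import Enum, auto

class State(Enum):
    GRAZE = auto()
    FLEE = auto()
    REST = auto()

# A cartoon of typical game-animal AI: a handful of hand-picked
# rules, no neurons anywhere. The rendering budget (fur, lighting)
# dwarfs whatever this costs.
def next_state(state, player_distance, energy):
    if player_distance < 10:
        return State.FLEE      # panic overrides everything else
    if state is State.FLEE:
        return State.REST      # safe again; catch its breath
    if energy < 20:
        return State.REST      # too tired to graze
    return State.GRAZE

print(next_state(State.GRAZE, player_distance=5, energy=80))  # State.FLEE
```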

Based on these observations, we can posit that even if future beings were interested in simulating humans, they’re unlikely to simulate many of us with sufficient fidelity to generate subjective qualia. (Whether computer simulations can generate qualia at all is controversial, of course; if they cannot, this whole discussion is moot, but if they can, doing so likely requires a simulation that is relatively complex and faithful to reality.)

Ironically, then, the notion that we are all simulations falls apart because it is an exercise in egotism: we assume that our present-day selves are so special that, amid the vastness of the universe and the depth of cosmological time, our distant descendants will find us so uniquely fascinating that they will recreate us, not only occasionally or in shallow simulacra devised for their purposes, but in vast numbers and with enough fidelity to generate the rich inner lives each of us observes subjectively today. I will confess that I don’t have a rigorous disproof of this idea, but I also don’t see any reason to think it’s a likely scenario, and it seems broadly incompatible with many aspects of my worldview.