refutations of two Oxford philosophers

Recently I came across two treatises by Oxford philosophers belonging to the transhumanist milieu (e.g. Humanity+, the Future of Humanity Institute). Both works are interesting for incorporating higher-level scientific and technological arguments into a field that, with the passing of time, has fallen into general neglect, sitting at a level of thought well below that at which scientific inquiry and technological progress now take place. Transhumanism, whatever its actual content, at least updates philosophy to be relevant in the modern context.

The works, one by David Pearce on the Hedonistic Imperative and one by Nick Bostrom on the Simulation Argument, are quite provocative and engaging to read. They will be briefly summarized and refuted in what follows.

1. On the “Hedonistic Imperative”:
Summary. The argument is that all suffering (physical and psychological) is the result of evolutionary optimization for Darwinian fitness, which Pearce presumes to be superfluous in an advanced technological, post-Darwinian world in which we can exert control over our environment; so if there is technology to eradicate the neurological basis of suffering and redirect its inputs toward varying levels of happiness, it should be applied to all sentient life.

Pearce likens the likely future perspective on suffering to the present view on physical pain during surgeries — as an unnecessary legacy that can and should be eradicated. In the same process, happiness can be enhanced and higher levels of the same can be explored.

Refutation. The treatise lists a number of objections concerning the ethics of the large-scale pro-social “control,” as it were, that must be put in place for a happiness regime to be achieved in the first place, and then to last stably. Pearce argues that this is in everyone’s interest and that conservative pro-suffering sabotage would be stamped out much as lawful society stamps out crime. He gives a number of fairly good analogies here, so despite my sense that there is something facile in this, I do not wish to argue against this implementation detail. Something more basic is wrong instead.

The entire argument of the work, I believe, rests on the assumption that suffering and happiness lie on a continuous axis that was simply biased wrongly, and that can be arbitrarily shifted upwards to eliminate negative experiences. This supposes that negative feedback mechanisms are unnecessary in a post-Darwinian world and that a hedonic gradient, even if entirely positively biased, can drive motivation and progress (this much is true). But what is more (and the author does not appear to address this), it implies that in a post-Darwinian world there should exist no rational aversive motivation requiring negative feedback to modulate: everything should be wanted, or wanted more. How can this be, unless the world itself (not just human experience) is entirely engineered against all danger, and static in the sense of there being no need to adapt to its changes? The analogy to the earlier example would be eliminating all sensation of physical pain, not only during surgeries but for all time. This is feasible only if the world has no physical dangers, and never will.

There is a core difference between positive and negative feedback, and it is exactly the aversive nature of suffering that demarcates this difference. A lesser degree of happiness will be less preferred than a stronger degree of happiness, certainly, but it is still an attractive experience, hence it will be taken if presented unless an opportunity cost stands in the way — but opportunity costs do not exist in Pearce’s post-Darwinian world of infinite life and easily obtainable happiness of any dosage. Even if they did exist, it would still require prescience and global planning on the part of the actor, as causal goal optimization can only be greedy and local. A quasi-static world without negative feedback doesn’t strongly converge to anything, and even if we can reshape all sentient beings in our world, we will be out-competed and replaced by Darwinian beings from another world.

2. On the “Simulation Argument”:
Summary. This argument says that if ours is a typical world and there may be others like it, then if a non-negligible proportion of such typical worlds obtain the post-human technology to run, and do run, an astronomical number of full-scale typical-world (“ancestral”) simulations, an arbitrary typical world such as ours is almost surely a simulated world. (Bostrom presents a more involved trilemma, but it is of no substantial additional interest.)

Bostrom’s argument is based on his “bland indifference principle,” which says that in the absence of strong evidence to the contrary, we can only assume we are drawn with uniform prior from the population of all worlds, most of which are simulated if massive simulations are commonplace.
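The arithmetic behind this indifference reasoning can be sketched in a few lines. This is an illustrative reconstruction, not Bostrom’s own notation: `f` and `n` are hypothetical parameters standing for the fraction of worlds that run simulations and the average number of simulations each such world runs.

```python
# Illustrative sketch of the counting argument behind the indifference
# principle (hypothetical parameters, not figures from Bostrom's paper).
# If a fraction f of typical worlds run an average of n full-scale
# simulations each, then per real world there are f * n simulated worlds,
# so under a uniform prior an observer's chance of being simulated is:
#   p_sim = f * n / (f * n + 1)

def p_simulated(f: float, n: float) -> float:
    """Probability that a uniformly drawn world is simulated."""
    return (f * n) / (f * n + 1)

# Even a tiny fraction of simulating worlds dominates once n is astronomical:
print(p_simulated(0.001, 1_000_000))  # ~0.999
```

The point the sketch makes concrete is that the conclusion is driven almost entirely by `n`: any non-negligible `f` paired with an astronomical `n` pushes the probability arbitrarily close to one.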

Refutation. There are some extant criticisms of this work on probabilistic grounds, but given the basic simplicity of Bostrom’s argument, they do not hold up. One such criticism, for example, argues that we should place a prior on the existence of a world outside ours; but this is exactly equivalent to the probability that we are in somebody’s simulation, a prior we are unable to ascertain and the very subject of Bostrom’s indifference principle.

There are, again, a number of objections concerning the feasibility of running sentient AI with experiences similar to ours, and the like, but the core problem does not lie here. It lies deeper, in the assumption of substrate independence.

At its core, Bostrom’s is an argument about human limitations. If humans are capable of replicating the world and becoming god-like, then there is no rational reason to believe that we are not one of some god’s experiments. This seems prima facie preposterous, but perhaps the more humble belief is that we will not have such capabilities, for very basic reasons. It is more likely that technological capabilities built out of the physical fabric will not be able to fully replicate that fabric; that there is some fundamental informational law preventing such replication, just as a virtual machine in full-emulation mode, though approximately Turing-complete, is not truly Turing-complete owing to its limited information storage (the tape), by which the emulated will always be inferior to the emulator.
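The emulator-tape intuition can be made quantitative with a toy model. The overhead factor `r` below is an assumption for illustration, not a measured quantity: it stands for whatever fraction of its own resources a world can devote to the level it simulates.

```python
# Toy model of a nested simulation chain: if each level can only pass a
# fraction r < 1 of its computational resources down to the level it
# emulates (the emulated tape is strictly smaller than the emulator's),
# capacity decays geometrically with nesting depth.
# The value of r is a hypothetical assumption, chosen for illustration.

def capacity_at_depth(base: float, r: float, depth: int) -> float:
    """Resources available to a simulation nested `depth` levels down."""
    return base * r ** depth

base = 1.0   # the putative "real" substrate, normalized to 1
r = 0.1      # assumed: each level passes on 10% of its resources
for d in range(4):
    print(f"depth {d}: capacity {capacity_at_depth(base, r, d):g}")
```

Under any `r` strictly below one, the chain degrades exponentially; only exact substrate independence (`r = 1`) keeps every level of the chain equivalent, which is why that assumption carries so much weight in the argument.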

In this light, substrate independence is not a weak and easy assumption; it is an incredibly strong one. It is the part of the argument that ensures the simulation chain does not degrade, yet all our current evidence points to a necessarily degrading simulation chain. Since we observe no obvious informational limits to our universe, it is more reasonable to take this as evidence that we are not degraded by simulation (so far as we can tell) than as evidence of substrate independence. As our physics advances and the limitations born of our ignorance fall away, we should find it less and less likely that we are simulated, as we position ourselves higher up the chain toward the putative “real” substrate. Thus the more plausible consequence of the simulation argument is not the revelation of a rationalized Creationism, in which we must choose between future ignorance and present subservience, but a progressive convergence to god-human, real-simulated equivalence. That is to say, as we know more and become more capable, the distinction between the real and the simulated falls away, and simulation agnosticism ceases to be meaningful at all. All “real” computation is ultimately carried out by the universe via physics, which may as well be considered a simulation of itself. Occam’s razor then takes care of the superfluous appeal to the “true real,” just as it always has with regard to the question of a creator.
