Hitchhiker’s Guide to the Multiverse

April 28th, 2003 | Posted by paul in Uncategorized

I’ve been thinking a lot about Max Tegmark’s ideas on the multiverse and some of their more profound implications. This one occurred to me the other day:

According to Tegmark, even though the Level 1 multiverse is infinite, any given Hubble volume is limited to a very large but finite number of possible states, 2^(10^120), so configurations must eventually repeat. Since there are an infinite number of Hubble volumes, at least one of them should realize the maximum possible intelligence within its light cone. The same can be said of the 2^(10^120) possible states of a given Hubble volume at Level 3.
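To spell out the counting step (a back-of-the-envelope restatement of my own, taking the 10^120-bit figure as given rather than deriving it), here is the pigeonhole argument in symbols:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Pigeonhole sketch, assuming a bound of N = 10^120 bits of
% information per Hubble volume (taken from the post, not derived).
A Hubble volume storing at most $N = 10^{120}$ bits can occupy at most
\[
  2^{N} = 2^{10^{120}}
\]
distinct states. An infinite Level 1 multiverse contains infinitely many
Hubble volumes drawing on finitely many states, so at least one state must
recur infinitely often; with Tegmark's further assumption of essentially
random initial conditions, every state of nonzero probability, including
whichever one realizes the maximum intelligence possible in its light
cone, is expected to recur infinitely often.
\end{document}
```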

Would this not be equivalent to a quantum computer with 10^120 qubits, enough to span those 2^(10^120) possible states (see the sketch below), whose program’s output is essentially the answer to life, the universe and everything? Is this not similar to The Hitchhiker’s Guide to the Galaxy, except in this case the Level 1 universe is the computer and whatever maximum intelligence it achieves is the answer? Thinking of it another way, we are talking about the maximum amount of intelligence possible within a Level 1 multiverse, which by definition is equivalent to the maximum possible apotheosis within a Level 1 parameter set.
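A quick sanity check on the qubit count (my arithmetic, not the original post’s): the number of qubits needed grows as the logarithm of the number of basis states, so matching the Hubble volume’s state count takes 10^120 qubits rather than 2^(10^120).

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% n qubits span a Hilbert space with 2^n basis states, so the
% quantum-computer analogy needs n = 10^120 qubits to cover the
% 2^(10^120) possible configurations of one Hubble volume.
\[
  2^{n} = 2^{10^{120}}
  \quad\Longrightarrow\quad
  n = \log_{2}\!\left(2^{10^{120}}\right) = 10^{120}\ \text{qubits}.
\]
\end{document}
```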

Following this further: since Level 1 is infinite and every one of the 2^(10^120) possible states repeats an infinite number of times, every ‘you’, even all the ones who experience every imaginable degree of suffering in other universes, will ultimately find themselves in a Level 1 maximum heavenly apotheosis. Given the nature of our physical constants at Level 1, space-time seems destined to be ruptured again by eternal chaotic inflation. So the question remains: will the maximum apotheosis of Level 1 figure out a way to outlive the end of Level 1 space-time? I’m betting that they will, as we are talking about an infinite amount of intelligence devoted to the problem.

Either way, I suspect the end result is an infinite number of universes achieving their own unique maximum degree of intelligence and blissful apotheosis, transcending the limits of Levels 1, 2, 3, and eventually 4 – resulting in all of us growing and evolving into higher forms of intelligence, compassion and wisdom without end.


The Singularity and The Apotheosis

February 7th, 2000 | Posted by paul in Uncategorized

By Eliezer Yudkowsky

The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds – not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being – growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we’ve dreamed of experiencing, becoming everything we’ve ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever… or perhaps embarking together on some still greater adventure of which we cannot even conceive… that’s the Apotheosis.

We accept the possibility that this future may be unattainable; there are many visualizations under which Apotheosis is impossible. Probably the most common category is where the superintelligences have no particular reason to be fond of humanity – all superintelligences inevitably come to serve certain goals, and we don’t have any intrinsic meaning under whatever goals superintelligences serve, or we’re not sufficiently optimized – so we get broken up into spare atoms. Perhaps, in such a case, the superintelligences are right and we are wrong – by hypothesis, if we were enhanced to the point where we understood the issues, we would agree and commit suicide.

There was a point where I was sure that superintelligent meant super-ethical (probably true), and that this ethicality could be interpreted in anthropomorphic ways, i.e. as kindness and love (unknown). Now, with the invention of Friendly AI, things have gotten a bit more complicated. Apotheosis is definitely a possibility. I refuse to hope for an Apotheosis that contravenes the ultimate good, but I can hope that the ultimate good turns out to be an Apotheosis – and if there is no “ultimate good”, no truly objective formulation of morality, then Apotheosis is definitely the meaning that I’d currently choose. So I hope that all of us are on board with the possibility of an Apotheosis, even if it’s not necessarily the first priority of every Singularitarian.

The Principle of Apotheosis covers both the transhumanist and altruist reasons to be a Singularitarian. I hope that, even among the most philosophically selfish of transhumanists, the prospect of upgrading everyone else to godhood sounds like at least as much fun as being a god. There are varying opinions about how much fun we’re having on this planet, but I think we can all agree that we’re not having as much fun as we should.

Even after multiple doses of future shock, and all the other fun things that being a Singularitarian has enabled me to do to my personality, I still like to think of myself as being on track to heal this planet – solving, quite literally, all the problems of the world. That’s how I got into this in the first place. Every day, 150,000 humans die, and most of the survivors live lives of quiet desperation. We’re told not to think about it; we’re told that if we acknowledge it our minds will be crushed (11). I, at least, can accept the reality of child abuse, cruelty, death, despair, illiteracy, injustice, old age, pain, poverty, stupidity, terror, torture, tyranny and any other ugliness you care to name, because I’m working to stop it. All of it. Permanently.

It’s not a promise. It can never be a promise. But I wish all the unhappy people of the world could know that, whatever their private torment, there’s still hope. Someone, somewhere, is working to stop it. I’m working to stop it. There are a lot of evil things in the world, and powerful forces that produce them – Murphy’s Law, blind hate, negative-sum selfishness. But there are also healers. There are, not forces, but minds who choose to oppose the ugliness. So far, maybe, we haven’t had the knowledge or the power to win – but we will have that knowledge and that power. There are greater forces than the ugliness in the world; ultratechnologies that could crush Murphy’s Law or human stupidity like an eggshell. I can’t show an abused child evidence that there are powerful forces for good in the world, forces that care – but we care, and we’re working to create the power. And while that’s true, there’s hope.

There is no evil I have to accept because “there’s nothing I can do about it”. There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I live a life free of “the normal background-noise type of guilt that comes from just being alive in Western civilization”, to paraphrase Douglas Adams (12). It’s a nice feeling. All you have to do is try to help save the world.

Related Links:

The Singularity Institute for Artificial Intelligence

The Singularity Principles

Singularity Watch
