Superintelligence: Danger to Humanity?
Artificial Intelligence is already part of our daily life. The point of the debate on AI is that if you extrapolate current trends into the future, a moment will come when AI surpasses human intelligence in every way. It could take quite a few decades, but some say it would be the last invention made by man, because from that moment on there will be robots, or whatever they are, that do it better. This event is also known as the technological singularity.
These robots would not be confined to a biology that arose through evolution but could be built with optimal designs and materials for intelligence, and they might also improve themselves, going through a development that would render us human beings useless. There is a danger that mankind could go extinct because this new “species” could see us as enemies or simply as waste. We would be lucky to be kept alive out of some sort of respect for living beings, or a need for conservation.
The paradox is that, on the other hand, we probably can’t afford to be without AI to solve the many problems we face now and in the future. These may include problems in the fields of climate, economy, cosmology, space travel and, indeed, AI itself. It is therefore necessary that we start addressing now the question of how to steer the development of AI in the right direction, so that humanity moves forward instead of digging its own grave.
This can’t be left to the technicians and engineers alone, because we are now dealing with human aspects that belong to the realm of ethics and philosophy as well. One could of course try to impose restrictions on AI, on the hardware as well as the software, by setting some kind of smart rules, but we must assume that a super-AI would always find a way to escape those restrictions.
Therefore something will have to be developed with inherent characteristics that prevent it from behaving badly and destroying mankind, whether intentionally or not. It would have to be something with a kind of built-in respect for other forms of life in general, and humans in particular. The question then is why, for most of us at least, it is normal to have that respect.
The answer might lie in the fact that each individual is unique. Everyone is born with his own set of genes and then goes through his own development; even identical twins end up as distinct individuals. It is not (yet) possible to make a backup or copy to replace someone, so we find that no one has the right to kill another. Of course this doesn’t mean it never happens, and there may be lots of other reasons why causing harm to others is bad, but the basic principle here is that every person is unique.
One could therefore imagine that a super-AI or robot that, instead of being mass-produced in arbitrary numbers, had unique characteristics and went through its own development as an individual, would be considered a unique “person” with its own “life.” Most likely it would be a sociopath that only cares about its own purposes, but still, you may have to think in that direction if you want to build an AI whose morals and ethics are not implemented by hand, and therefore prone to errors and abuse, but really emerge from the thing itself.
Perhaps a more positive way to look at the development of AI is by saying that in about 40 or 100 years AI may be so well developed that all the problems humanity is now struggling with are solved. There is no longer disease or scarcity, everything you need is there, and you’re practically immortal. This sounds nice but could bring its own problems. What’s the meaning of life if you never die, or if a copy of you can always continue in a new body?
But isn’t that what humanity has always yearned for: to conquer death and live forever? This wish, though, may very well prove to be the ultimate death sentence. Not nuclear weapons or artificial intelligence, but the shortsighted quest for immortality might ultimately be humanity’s destruction. This could even be the solution to the Fermi paradox: Why aren’t there any aliens around here to be seen? Well, that’s because they’ve all become immortal. They might exist, but they’re not alive, at least not the way we see it. Either that, or they’re just as stuck in a corner of their galaxy as we are, unable to travel the vast distances in space and time.
Life could be viewed as a system that takes energy from its environment and uses it to, at least temporarily, stop entropy from doing its work. Entropy is often called “the arrow of time,” and that’s what happens: living takes time (and money). One thing about death is that once you’re dead, you can’t die anymore, which means that you’re now immortal indeed. Turn that around and the conclusion is: immortality is just another word for death, and to wish for it is the height of folly.
That’s because true immortality (in the sense of really living infinitely, unable to ever die, as opposed to just having a very long life) comes with the logical consequence that time no longer plays any part. In fact, time would not exist anymore, since there is an infinite amount available, which means that anything you could ever experience in your “life” happens an infinite number of times. In that regard, the universe should be completely crammed with immortals by now.
But what happens when our universe comes to its end, whether it’s in a Big Crunch, a Big Bang or whatever? Surely even immortality isn’t going to protect anyone from that? If you take the word literally, at least, then yes it should. Immortal is immortal, independent of any universe or galaxy. “The Last Question,” to borrow from Isaac Asimov, would then be: How on earth should we imagine such an entity? So don’t be afraid of death but be grateful for it… it’s the reason you live!
(2015)
