Monday, October 23, 2017

Nick Bostrom - Superintelligence

I am just finishing Bostrom's Superintelligence. Although the book seems far-fetched at moments (and if you know me, you realize this is probably a strong understatement), the argumentation stays honest and serious, discussing many possible scenarios of the so-called intelligence explosion.

Bostrom essentially claims that:
  • Sooner or later, we will develop artificial intelligence as capable as humans in all relevant skills. 
  • Since humans are probably not at the top of the possible intelligence ladder, and since clever machines will accelerate the progress even further, the AIs will soon afterwards surpass us in many directions. (Including speed: transistors run at 2 GHz, while neurons fire at around 10 Hz; a back-of-the-envelope comparison follows this list.)
  • If we fail to program the right motivation into the AI soon enough, we will inevitably lose control of the situation.
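
To get a feel for the speed argument in the second point, here is a back-of-the-envelope calculation (the 2 GHz and 10 Hz figures are the rough values from above; the raw ratio of course says nothing about what a single operation accomplishes):

    # Rough speed comparison: transistor switching vs. neuron firing.
    transistor_hz = 2e9  # ~2 GHz, a typical CPU clock rate
    neuron_hz = 10       # ~10 Hz, a rough average neuron firing rate

    speedup = transistor_hz / neuron_hz
    print(f"raw speed ratio: {speedup:.0e}")  # 2e+08 -- eight orders of magnitude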

Bostrom names several possibilities for how we could try to control the AI, but argues that they are risky in the long term, given the AI's capabilities. He also spends several chapters elaborating on how we should specify the motivation system (if we are ever able to make a solid algorithmic representation of it). He claims that AI minds could be extremely counter-intuitive in their motivations. This seems very plausible to me. For example, they could suffer from:

Perverse Instantiation: We specify what we want in such a way that the unexpected solution the AI finds is something we never had in mind. ("Make us happy" can be achieved by placing electrodes into our brains, for instance.)

Infrastructure Profusion: If we specify the goal of an autonomous action incorrectly, the AI can turn the whole Earth into microprocessors while blindly pursuing the goal of solving the Riemann hypothesis, or do similar damage.

Mind crime: If you believe simulated minds can have moral status, there is a danger that the AI will harm the simulations (for instance, torturing them while exploring different scenarios in overly detailed simulations). 

Later, Bostrom classifies the possible types of AI we could try to build: "Oracles" that only provide the information we require, without any motivations of their own; "Genies" and "Sovereigns" that fulfill tasks one at a time or act completely autonomously; and "Tools", which perform complex tasks without acting as agents. Bostrom then tries to construct a (more or less) formal definition of the goal we should try to implement in the initial AI to avoid the above-mentioned failures. (Assuming we ever find out how.) There is much more, but you will have to read the book. 😉

I really liked the book: it is very intellectually stimulating, and after a long time it is one with many new concepts, which is something I really like. I think the outline of the scenarios presented therein is more or less correct, but there are also many cases where I disagree with Bostrom. It is, for sure, very hard to imagine intelligences that surpass us in the same way our intellect surpasses that of a dog, or even much more. The way Bostrom approaches this is that he simply vastly overestimates the superintelligent AIs in all directions. One consequence of this strange logic is a sort of "exponential fitting": before the Neolithic Revolution, it took humanity 220,000 years to double its production; for the agrarian society, it was 900 years; and in a purely industrial society, it would be 6.3 years+. The intelligence explosion should therefore happen within minutes, or at most days. (Bostrom also presents slower scenarios, but claims this time scale should not surprise us.)
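
To show what I mean by "exponential fitting", here is a minimal sketch of the extrapolation, using the doubling times quoted above (fit an exponential trend to three points and read off the next one):

    # Doubling times of world production for the three eras quoted above (years).
    doubling_times = [220_000, 900, 6.3]

    # Each transition shrank the doubling time by a large factor:
    ratios = [a / b for a, b in zip(doubling_times, doubling_times[1:])]
    print(ratios)  # ~[244, 143]: roughly two orders of magnitude per era

    # Extrapolating one more step of a similar size:
    next_doubling = 6.3 / (sum(ratios) / len(ratios))  # years
    print(f"next doubling time: {next_doubling * 365.25:.0f} days")  # ~12 days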

If there were no hardware to develop, and if the AI could design all the technology from first principles, then maybe. But I am very skeptical that technology can be developed without experiment (which takes time). There are very good reasons to think no computer will ever be able to predict the weather beyond several weeks, because the required precision of the initial conditions grows exponentially with the required prediction time. An agent capable of thinking about not a few, but millions of entities at once could out-think us in many ways, but we already know that there are some practical limits even for superintelligences, and we should not blindly extrapolate. It is even possible that there are surprisingly few practical technologies incomprehensible to humans but available to superintelligences. A superintelligence would discover them much faster, but the technologies would not necessarily be incomprehensible to us in the way differential equations or aeroplanes are incomprehensible to dogs.
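
The weather example can be made concrete with any chaotic system; here is a toy illustration using the logistic map (my choice of example, not Bostrom's):

    # In a chaotic system, an initial error delta grows roughly as
    # delta * exp(lambda * t), so each extra unit of forecast time demands
    # exponentially more initial precision -- regardless of intelligence.

    def trajectory(x0, steps):
        """Iterate the logistic map x -> 4x(1-x), a standard chaotic system."""
        xs = [x0]
        for _ in range(steps):
            xs.append(4 * xs[-1] * (1 - xs[-1]))
        return xs

    a = trajectory(0.3, 60)
    b = trajectory(0.3 + 1e-12, 60)  # perturb the 12th decimal place

    for t in (0, 20, 40, 60):
        print(t, abs(a[t] - b[t]))
    # The gap grows from 1e-12 to order 1 within ~40 steps: every extra
    # step of prediction costs about one more bit of initial precision.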

The same goes for the claims that isolation and direct control cannot work because a lone supercomputer, isolated from the outer world with only a keyboard input, could surely outmaneuver its guards and take over the world. Bostrom is more careful in his claims, but one can still notice that arguments from this community sneak into his assumptions here and there.

Last thing I will point out: I do not quite believe there will be a reasonable algorithmic way to encode the "final goal" the AI shall follow in the way Bostrom imagines it. (This is the "control problem" that should be solved before we start creating the AI.) All the abstract motivations he constructs are interesting, and some of them are also well-defined, but it is quite possible that complex intelligences will be achievable only through self-organizing or self-learning algorithms, like neural networks, which do not allow such a solid specification of such abstract goals. Although I am no neuroscientist, I tend to think that at the lowest level of our brains there is probably some simple and very solid algorithm governing how the neurons interact, connect, and learn in the process. The concepts (people, trees, houses, etc.), on the other hand, are probably derived, and although they can be partially built into the brain architecture as well, they are much further from the low-level algorithm. It seems hard to imagine that we could encode abstract goals in a layer low-level enough that they would stay enforced, and would be correctly generalized and maintained by the AI during the whole intelligence explosion.
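
As a toy illustration of that gap (my own sketch, nothing from the book): take plain Hebbian learning as a stand-in for the simple low-level rule. The rule itself is a few lines, and there is no obvious place in it to hardwire an abstract final goal:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = np.zeros((n, n))   # synaptic weights between n model neurons
    eta = 0.01             # learning rate

    for _ in range(1000):
        x = rng.random(n)          # an incoming activity pattern
        W += eta * np.outer(x, x)  # Hebbian rule: co-active neurons wire together
        np.fill_diagonal(W, 0.0)   # no self-connections

    # Concepts -- let alone "human values" -- would exist only as statistical
    # structure in W, not as anything this update rule can be told to preserve.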

Overall, I am very glad that these ideas are entering public awareness, because even if many of them sound very counter-intuitive and improbable, they may be closer than we think.

+ I am not sure about the numbers right now; I will soon correct them according to the book. These I took from here, where there is only a citation.