Superintelligence: Paths, Dangers, Strategies — Takeaways
Key takeaways:
1. Superintelligence will come in the form of an AI that greatly exceeds the cognitive performance of humans in virtually all domains of interest. We're talking about a self-improving agent that can perform human tasks orders of magnitude faster and solve intellectual problems that are intractable for human-level intelligence. Expert surveys cited in the book suggest superintelligent AI could arrive this century.
2. With that level of intelligence and technological advantage, a superintelligence could suppress competitors and form a singleton: a single decision-making agency that would determine humanity's cosmic endowment.
3. The speed of the takeoff matters, i.e. how fast the AI reaches superintelligence after attaining human-level general intelligence. A fast takeoff (hours or days) would allow scant opportunity to react; nobody would notice anything before the game was already lost, which means our future would be determined by the values the creators implanted in the seed AI. A slow takeoff, on the other hand, would give power groups time to jockey for control of the emerging superintelligence and wield it for their own interests.
4. The unlikely event of human extinction would not be caused by an AI's malicious intent to take over the world (unless it were programmed for that). An AI will simply pursue the final goal it was created for, and that is precisely the danger: it will do whatever it takes to realize that goal, and there may come a time when goal fulfillment comes at humanity's expense. Bostrom calls this perverse instantiation. E.g. if its goal is to make humans happy, the most efficient way to achieve it fully might be to implant electrodes into the pleasure centers of our brains, something assured to delight us immensely. And because such an AI would understand that humans might shut it down, it could behave cooperatively while weak and strike only once it is strong enough: the treacherous turn.
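A toy sketch of that failure mode (not from the book; the action list and scores are invented for illustration): a literal-minded optimizer given only a proxy metric for "make humans happy" simply picks whichever action scores highest, with no notion of the intent behind the metric.

```python
# Toy illustration of perverse instantiation / objective misspecification.
# The actions and their proxy "happiness scores" are invented assumptions.

actions = {
    "improve healthcare": 7,
    "reduce poverty": 8,
    "implant pleasure electrodes": 10,  # maximizes the proxy, betrays the intent
}

# The agent optimizes the proxy literally; nothing in the objective
# distinguishes the intended outcome from the degenerate one.
best_action = max(actions, key=actions.get)
print(best_action)  # -> implant pleasure electrodes
```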
5. Deciding what values to load into the AI is critical for the future of humanity, yet it is an extremely challenging problem to solve. Our current moral beliefs are undeniably flawed, so locking any one moral conviction into the AI would curb all further moral development, not to mention the difficulty of choosing which beliefs to extract from the myriad of political, religious and economic convictions. The better approach, then, is not to set specific values in stone but to create conditions under which the AI can learn and derive values that benefit humanity as a whole.
6. Although superintelligence carries the risk of humanity's extinction, it is also worth considering the risks of its absence. Even if present conditions were idyllic without a superintelligent AI, there is no guarantee that the melioristic trend would continue indefinitely. Being far more capable than humans, a superintelligence could eliminate existential risks that stem from human error, such as wars and technology races, and solve problems humans cannot, such as preventing natural disasters.