Intuitively, a superintelligence is an entity whose ability to solve information-processing tasks vastly exceeds that of humans, both individually and collectively.
Because alignment is hard and undesired side effects are ubiquitous, a catastrophic outcome is often argued to be the default scenario should a superintelligence emerge. This is why researchers concerned about existential risks often call for massive investment in AI safety and AI ethics.
Strong versus weak AI
A distinction is often made between strong and weak artificial intelligence. However, PrunklWhittleston-20 argue that such a binary categorization is neither descriptive nor useful. Arguably, algorithms simply keep getting better and better at their respective tasks.
Human-level AI is often defined as an algorithm capable of solving any task that a human can solve, in less time and at lower cost Bostrom-16 GraceSDZE-17. It is however unclear how relevant this notion is for understanding AI risks ElmhamdiHoang-19FR.
ElmhamdiHoang-19FR also argue that the YouTube recommendation algorithm is already vastly superhuman at its task, partly because of its scale. It is indeed noteworthy that this algorithm reviews 500 hours of newly uploaded video per minute, and monitors the daily activities of billions of humans.
General versus narrow AI
Another common distinction made is between artificial general intelligence (AGI) and "narrow AI".
Arguably, AGI is not a capability, but rather a framework. AGI models, such as AIXI, typically tackle reinforcement learning in a complex interactive environment. It is noteworthy that many algorithms, especially recommendation algorithms, already operate in such environments at very large scale, and that these environments are extremely complex.
By contrast, "narrow AI" may refer to algorithms designed for specific tasks, such as supervised learning.
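The contrast can be sketched with a hypothetical supervised-learning task: the learner is given a fixed dataset and a fixed objective, and it never interacts with an environment at all (the data below are made up for illustration).

```python
# Hypothetical labeled dataset: inputs xs with labels ys = 2 * xs.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 4.0, 6.0, 8.0]

# Closed-form least squares for a single weight w minimizing sum((w*x - y)^2).
# Unlike a reinforcement learner, this "narrow" learner only maps fixed
# inputs to fixed labels; its actions never change the data it sees.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

prediction = w * 5.0  # → 10.0
```

The entire task is specified up front by the dataset and the loss; there is no loop in which the algorithm's outputs feed back into its future inputs.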