The prospect of artificial intelligence outsmarting humans has been discussed for decades, and researchers have now weighed in on whether we could ever control a high-level computer superintelligence. Their answer: almost certainly not.
Controlling a superintelligence far beyond human comprehension would require building a simulation of it that we could analyze. But if the intelligence itself is incomprehensible to us, creating such a simulation is impossible.
Rules like 'cause no harm to humans' cannot be set if we cannot comprehend the kinds of scenarios an AI might produce. Once a computer system operates at a level beyond that of its programmers, we can no longer set limits on it.
Bear in mind that a superintelligence is multi-faceted: it could mobilize a wide range of resources in pursuit of objectives that are potentially incomprehensible to humans. How, then, could it be managed?
Part of the reasoning comes from computability theory: it is provably impossible to determine, for every possible program that could ever be written, whether that program will halt or run forever (the halting problem). A superintelligence, meanwhile, could in principle hold every possible computer program in its memory at once.
Any program written to stop an AI from harming humans and destroying the world might reach a conclusion and halt, or it might not, and it is mathematically impossible for us to be completely sure either way. That means no algorithm can guarantee containment.
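The undecidability argument above is essentially the classic halting-problem proof. As a rough sketch (the function names here are hypothetical, and the whole point is that `would_halt` cannot actually be implemented):

```python
def would_halt(program, argument):
    """Hypothetical oracle: return True if program(argument) would halt.
    The contradiction below shows no such general-purpose function can exist,
    so it is left unimplemented."""
    raise NotImplementedError("provably impossible in the general case")


def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if would_halt(program, program):
        while True:      # oracle says it halts -> loop forever
            pass
    return "halted"      # oracle says it loops -> halt immediately


# Feeding paradox to itself yields a contradiction:
#   - if would_halt(paradox, paradox) returns True, paradox(paradox) loops forever;
#   - if it returns False, paradox(paradox) halts.
# Either answer is wrong, so would_halt cannot exist. A perfect
# "will this AI ever cause harm?" checker runs into the same wall,
# because harm-checking can encode halting-checking.
```

The same diagonalization defeats any supposed universal "harm detector": deciding whether an arbitrary program ever takes a harmful action is at least as hard as deciding whether it halts.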
The alternative to teaching the AI ethics and instructing it not to destroy the world, something no algorithm can be certain of achieving, is to restrict the superintelligence's capabilities: cutting it off from parts of the internet, for example, or from certain networks.
But this raises a dilemma: if we cannot use a superintelligence to solve problems beyond human reach, why create it at all?
And if we do press ahead with artificial intelligence, we may not even recognize the moment a superintelligence beyond our control arrives, such is its incomprehensibility.