Artificial Super Intelligence: The Bounties, and the Perils

January 16, 2016

The Singularity. The day everything will change. Utopia. Armageddon. A controversial moment in human history indeed. Artificial Super Intelligence (ASI) promises great change, but there is much controversy over what that means, and over whether we should even allow it.

Artificial Intelligence has been around for quite some time. It has proved that it can do certain tasks very effectively, such as flying a plane. It has also proved that it can, like Eurisko, develop rules of thumb that let it adapt to changes in large numbers of variables.

But however impressive, this is normal AI. It has a particular role, and while it is allowed flexibility, it cannot move outside its role, or make changes outside certain parameters. The complex algorithms that make up SpaceX's landing AI can adjust thrust, but not cabin pressure. Eurisko can adapt to changes to the rules of its warfare simulation, but can't invent new weapons or hold a basic conversation.

ASI, on the other hand, removes many of these restrictions. Nick Bostrom, a philosopher at the University of Oxford, defines ASI as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

The examples of AI listed above are by no means close to ASI. They are narrow programs following set procedures and excelling in specialized, well-defined fields. But that is no reason to assume they will stay there. The most commonly cited form of ASI is one that models human cognition and is able to learn, adapt, and act exponentially faster than we can, through machine algorithms, raw computing speed, and recursive self-improvement.

There is debate over whether this form of ASI is even possible. How can we model the brain when we don't even fully understand it? Despite this, we are making headway, so let us assume for a moment that ASI is possible, and that we will, eventually, be able to integrate it into our society. The question then becomes: should we?

First, the potential bounties. ASI promises to transform civilization. An artificial mind, given enough processing power and control over different systems, would be able to take in vast amounts of data, analyze them in many ways, and draw lightning-fast conclusions. This could lead to personalized healthcare, more efficient manufacturing, and ingenious solutions to geopolitical conflicts. It would mean faster and better-informed decisions. After all, with the terabytes of data being constantly collected, no team of humans could ever hope to understand all of it. Better to let a machine do the work for us.

Not everyone agrees. Some argue that, unlike past innovations, ASI poses real perils; some even argue that, like nanobots, it poses an existential threat to humanity. Those in the Armageddon camp fear that an ASI, particularly if it becomes integrated enough to manage even part of our economy and digitally connected society, could harm humans. A few fear outright robot rebellion, but the more serious critics point to poorly designed ASI as the risk. If we put a machine that thinks differently than we do in charge of decisions, what if it prioritizes health and freedom differently than we would? Could its idea of a 'long, safe, happy life' quickly morph, as it constantly updates itself, into sticking us in sterilized tubes and stimulating our pleasure centers? Perhaps more worrying, any ASI will by default lack oversight: when something makes trillions of decisions per second, no one can observe and criticize even a fraction of the decisions it makes for us.

This is perhaps the most solid argument against integrating ASI into our society. Even if it can make decisions based on more information, which it certainly can, and even if its decisions are in the long run better for us, which we won't know unless we try, should we truly outsource our decision-making to a non-human?

It is important to keep in mind that ASI is separate from present-day AI. Even if we reject ASI entirely, we can still be content making and controlling hundreds of specialized AIs to fly our planes, drive our cars, schedule our appointments, and perhaps one day file our taxes for us. ASI is a much higher-stakes game.

The potential benefits of ASI are huge, and its potential risks frightening. How ASI would work and be managed in practice remains disputed. The debate over whether it should be allowed, although it has been going on for decades, has barely moved outside academia. This is a decision that involves us all.