In the late 1990s, a lot of people worried that computers would explode or otherwise malfunction at the start of the new millennium because they would somehow be unable to handle numbers greater than 1999. In hindsight, those people look a little foolish, since the logic behind their concern makes no sense whatsoever and reveals essentially no understanding of basic computer processes.
Now, people worry about a coming ‘technological singularity.’ Numerous publications claim that as technology becomes more advanced, a potential catastrophe approaches, embedded within that very advancement. No one in popular culture seems sure exactly what that catastrophe entails, but it apparently involves computers becoming smarter than humans and gaining control over society in some way, frequently ending in global domination. Some claim that cell phones, with their incredible convenience and nearly infinite information potential, have already triggered that singularity and that humans are already enslaved by their technology.
Those people are all just as foolish as the ones who thought their computers would explode because of a big number. Whether artificial intelligence can or will ever advance beyond a glorified calculator, let alone match a human, is subject to some debate in related fields, though most people working on artificial intelligence do believe it can one day surpass human intelligence. That is not the problem with the idea of the singularity.
The problem with the singularity is the idea that a computer can gain power over humans and take over the world. If computers did become more intelligent than humans, they might well be able to gain power in one way or another, were it not for the fact that the engineers who deal in advanced technology always build in backdoors. There are always hard reboot switches, forced shutdowns and a number of limitations designed to prevent problems ranging from software crashes to global domination. Even when they are not obvious, those safeguards are present in any software, even on relatively simple gadgets like iPods. There can be no “rogue AI” because anyone smart enough to create one would also be smart enough to keep it in check.
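The "forced shutdown" idea mentioned above is a standard engineering pattern, often called a watchdog: a supervising process that kills a runaway worker once it exceeds some budget. Here is a minimal sketch in Python; the worker functions and the time budgets are illustrative choices, not anyone's actual safety mechanism.

```python
# Minimal watchdog sketch: supervise a worker process and force-terminate
# it if it runs past a time budget. This illustrates the general pattern
# of built-in shutdown limits, not any specific product's safeguard.
import multiprocessing
import time

def runaway_worker():
    # Stands in for a program that never stops on its own.
    while True:
        time.sleep(0.1)

def quick_worker():
    # Stands in for a program that finishes well within its budget.
    time.sleep(0.1)

def run_with_watchdog(target, time_budget):
    """Run `target` in its own process; terminate it if it exceeds
    `time_budget` seconds. Returns True if the watchdog intervened."""
    proc = multiprocessing.Process(target=target)
    proc.start()
    proc.join(timeout=time_budget)   # wait, but only up to the budget
    if proc.is_alive():
        proc.terminate()             # the hard "off switch"
        proc.join()
        return True
    return False

if __name__ == "__main__":
    print("watchdog intervened:", run_with_watchdog(runaway_worker, 0.5))
    print("watchdog intervened:", run_with_watchdog(quick_worker, 2.0))
```

The same idea scales from a single script up to hardware: a supervising layer that does not ask the supervised program's permission before cutting it off.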
For that matter, in the worst-case scenario in which an artificial intelligence does somehow threaten life as we know it, a solid power outage should wipe it out just fine. A computer program is only a computer program, no matter how advanced.
Before anything like a singularity could even have a chance of happening, multiple changes in society would have to take place. Right now, computers are tools and nothing more. To gain any kind of power, they would first have to be seen differently by society.