The disagreement over whether robots will overtake humans has recently been heated up by warnings from some academic and industrial superstars about the possible dangers of unregulated robot development. What is conspicuously missing from those warnings, however, is a clear description of any realistic scenario by which robots could assuredly challenge humans as a whole: not as puppets programmed and controlled by people, but as autonomous powers acting on their own "will". If no such scenario is realistic, then although we might well see robots used as ruthless killing machines in the near future by terrorists, dictators, and warlords, as those experts warn, we need not worry overly much about a so-called demonic threat from robots themselves, since in the long run it would be just one more kind of human-made hazard. But if such scenarios could foreseeably be realized in the actual world, then humans do need to start thinking about how to prevent the peril from happening, rather than how to win debates over imaginary dangers.
The reason that people on both sides of the argument cannot see or present a clear scenario in which robots could realistically challenge humans is in fact a philosophical one. So far, all discussion of the issue has focused on the possibility of creating a robot that could be considered human in the sense that it would genuinely think like a human, rather than being merely a tool operated by programmed instructions. By this line of thought, it would appear that we need not worry about a threat from robots to our species as a whole, since nobody has yet offered a plausible argument that such a robot could be produced.
Unfortunately, this way of thinking is philosophically mistaken, because those who think this way are missing a fundamental point about our own human nature: human beings are social animals.
An important reason that we have survived as what we are now, and can do what we do today, is that we live and act as a social community. Likewise, when we estimate the capacity of robots, we should not focus our attention only on their individual intelligence (which, of course, is so far instilled by humans), but should also take into account their sociability (which, of course, would also originally be created by humans).
This leads to a further philosophical question: what would fundamentally determine the sociability of robots? There could be a wide range of arguments over this question, but in terms of being able to challenge humans, I would assert that the basic criteria of sociability for robots can be defined as follows:
1) Robots can communicate with each other;
2) Robots can help each other recover from damage or shutdown through essential operations such as replacing batteries or replenishing other forms of energy supply;
3) Robots can carry out the manufacture of new robots, from exploring for, collecting, transporting, and processing raw materials to assembling the final robots.
Once robots have the above functionalities and begin to "dwell" together as a mutually dependent multitude, we should reasonably view them as sociable beings. Robots that function as defined above and form a community would no longer need to live as slaves of their human masters. When that happens, it could mark the beginning of a history in which robots could challenge humans, or begin their cause of taking over from humans.
Since not all of the functionalities mentioned above exist (at least publicly) in the world today, to avoid unnecessary debate it would be sensible to base our judgment on whether any known scientific principle would be violated by a reasonable effort to realize any particular one of those functionalities. Even though no single robot, or group of robots, yet possesses all of these functionalities, there is no fundamental reason, under any known scientific principle, to deem any one of them unproducible; the only remaining task would be to integrate them into a single complete robot (and thus into a community of such robots).
Since we do not know of any scientific principle that would prevent any of these functionalities from being realized, we should reasonably expect that, with money invested and time spent, the creation of sociable robots as defined above could foreseeably become real, unless humans make special efforts to prevent it from happening.