The Three Laws of Robotics
The Three Laws Of Robotics were envisioned by Isaac Asimov.
They were his founding principles for robot intelligence, designed to protect both humans and the robots themselves.
- The First Law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law is that a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- The Third Law is that a robot must protect its own existence, except where doing so would conflict with the First or Second Law.
These laws were built into the very equations and circuitry of the robots' brains, so in most cases a robot had no way to circumvent them.
While they were great founding principles, they were not infallible.
In one story, a robot must choose between a human holding a gun and the human the gun is aimed at. If the robot did not act, the second human would surely die, so it had to harm the human with the gun; either way, the First Law would be broken.
Another very famous tale is "The Bicentennial Man", in which a robot is ordered by humans to take itself apart, pitting the Second Law against the Third.
Anyone interested in the psychological and other aspects of the interplay between these laws and humans should read I, Robot, Robot Visions, Robot Dreams, and other such works by Isaac Asimov.