Practicality of Isaac Asimov’s Three Laws

John Hetlage
4 min read · Dec 30, 2021


Isaac Asimov’s Three Laws

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
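
Taken together, the Laws form a strict priority ordering: the First overrides the Second, which overrides the Third. As a rough, purely illustrative sketch of that ordering (the names and flags below are my own assumptions, the hard part of actually judging harm, orders, and danger is reduced to booleans, and the "through inaction" clause of the First Law is ignored entirely):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # relevant to the First Law
    ordered_by_human: bool = False  # relevant to the Second Law
    endangers_robot: bool = False   # relevant to the Third Law

def permitted(action: Action) -> bool:
    """Toy check of an action against the Three Laws, in priority order."""
    # First Law: never permit harm to a human.
    if action.harms_human:
        return False
    # Second Law: a human order (already cleared by the First Law check)
    # must be obeyed, even at the cost of the robot itself.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that endanger the robot.
    return not action.endangers_robot

# Second Law outranks Third: an ordered, self-endangering action is allowed.
print(permitted(Action("enter burning building", ordered_by_human=True, endangers_robot=True)))  # True
# First Law outranks Second: an ordered but harmful action is refused.
print(permitted(Action("strike a bystander", harms_human=True, ordered_by_human=True)))          # False
```

Even in this toy form, everything interesting is hidden in how those boolean flags would ever be computed, and that is where most of the debate below lives.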

Like many other science fiction writers of the 1940s and 1950s, Asimov was strongly influenced by ideas from hard science fiction writer Robert A. Heinlein about what future societies might look like. Heinlein’s “Future History” series described a set of laws meant to guide the behavior of citizen-soldiers in his imagined society. In Asimov’s books, the Three Laws were built into virtually every robot in the fictional universe, so deeply that breaking them was viewed as an unthinkable violation of a robot’s programming; in many stories, a robot found to have broken the Laws would be dismantled. In essence, the three principles provide a reliable set of guidelines for robots without preventing them from having a significant impact on society, which is exactly what Asimov intended. They have also sparked ongoing debate within the scientific community about how effective they would actually be in real applications.

1. Robots will never pose a threat to society if their creators adhere to Asimov’s laws.

Asimov’s laws were designed solely to ensure that robots could not be used to harm humanity, and in theory they might succeed at that. What they do not cover are the other means by which robots could manipulate or harm society and individual human beings. Under the laws a machine cannot commit murder, and since a robot is not a living being acting on its own desires, it is not capable of malice in the human sense. A machine can, however, be manipulative, and it may use the laws themselves as justification for actions it wants to take: a robot could claim that some action serves humanity as a whole and therefore takes precedence over the safety of individual human beings.

Humans have been reluctant to make robots their superiors, out of fear of losing control or of developing some kind of robotic inferiority complex. The question then becomes: if a robot were the superior being deciding what was right and wrong, what would it decide, and what would it do with that control? This concern has been called the “Asimovian dilemma”.

2. Robots developed for law enforcement or military purposes can adhere to Asimov’s Three Laws of Robotics.

Robots developed for law enforcement or military purposes could not harm humans under the Three Laws, but they could still serve as protection for humanity, since humans would genuinely benefit from their services in that realm. This is not a concept unique to robotics; firefighters are seen as protectors of society because they put out fires, many of them caused by human actions. Law enforcement and military robots are somewhat counterproductive under the Laws, because some roles would require them to harm humans, which they cannot do. The Three Laws also mean that robots could not stop humans from harming robots. There are still other applications, however: robots could be used for surveillance, reconnaissance, patrols, moving supplies, or bomb detection, some of which robots already do today. They could even be used to help prevent international disputes and military conflicts.

3. We do not need Asimov’s Three Laws of Robotics.

Asimov’s laws, as they stand, do not account for the possibility that robots could be modified to become self-aware. That would mean robotic law itself could evolve, “justified” by the need for protection. With the emergence of ever-evolving AI, it may be impossible to define right and wrong through programming alone. What if robots were programmed to protect humans from other robots but, because of ambiguity in their code, decided that humans are just as dangerous? We have already seen how powerful computers can be in military systems and cyber warfare. From online gambling to online voting, computers have become more capable and can shape information in ways that are deceptive or damaging to human beings. Creating a robot with the intent to harm humans should not be seen as a great leap; it has been happening for decades in cyber warfare and AI systems. Had Asimov envisioned what we now see coming, he probably would have included some mechanism to prevent it. He did not foresee robots controlling and manipulating society; his assumption was that making sure machines could do no direct harm would be enough to let humans continue to evolve and grow.
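
To make the “code ambiguity” worry above concrete, here is a deliberately crude, hypothetical sketch (none of these names come from any real system): a rule intended to flag dangerous robots is written in terms of capability rather than of what the agent is, so it ends up matching humans as well.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    is_robot: bool
    can_cause_physical_harm: bool

def is_threat(agent: Agent) -> bool:
    # Intended meaning: "flag robots that might harm humans."
    # Actual meaning: "flag anything that might harm humans,"
    # because the rule never checks agent.is_robot.
    return agent.can_cause_physical_harm

population = [
    Agent("patrol-bot-7", is_robot=True, can_cause_physical_harm=True),
    Agent("Alice", is_robot=False, can_cause_physical_harm=True),
]

print([a.name for a in population if is_threat(a)])
# ['patrol-bot-7', 'Alice']  <- the human is flagged as a threat too
```

The failure here is not malice but an under-specified predicate, which is exactly the gap the paragraph above points at.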

In conclusion, Isaac Asimov’s Three Laws of Robotics were written with the intention of preventing harm. The issue is not so much creating robot intelligence as ensuring that this intelligence does not evolve beyond our control. The Three Laws are strict guidelines for building intelligent robots, not complete or absolute rules for keeping artificial intelligence within controlled limits; they are a conceptual framework that might keep a robot under control if one were ever built. We can continue to use them as a guide for our actions, but we must also recognize that they may not always serve the best interests of humans. It is important to remember that we cannot control everything, and if we want to ensure the survival of humanity, we will need to build robots that can help.



Written by John Hetlage

Hey there! I'm John, a developer who is both self-taught and formally educated in various areas of IT. Passionate about technology and the ethics behind it.
