November 3, 2021 ENKI Team
In 1942, science fiction writer Isaac Asimov introduced The Three Laws of Robotics (also known as The Three Laws) in his short story “Runaround.”

The Three Laws are as follows:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its existence as long as such protection does not conflict with the First or Second Law.

The Three Laws are the organizational principles that unify Asimov's fictional world across his many works. His stories often centered on humanoid robots behaving in ways that seem to defy these laws, emphasizing the inherent conflict between humanity's understanding of morality and a robot's interpretation of it. Asimov's work has inspired generations of readers to imagine a world where human beings and robots exist alongside each other.

Nearly 80 years after The Three Laws were published, we are closer than ever to living in a world where androids exist and make our lives easier. As we continue to advance technologically, we must consider The Three Laws and how they apply to artificially intelligent beings. In the last two decades, we have seen incredible technological advancement in the realm of AI, and AI will play a significant part in our future. We already rely on AI-powered digital assistants accessed through our smartphones, and large companies like Amazon are in the early stages of deploying fully autonomous vehicles to deliver packages.
Though modern AI can be programmed to perform various tasks once reserved for humans, we are not yet capable of developing a sentient AI that operates outside our influence. For now, it is easy to program machines to obey our orders unquestioningly. But we are creeping ever closer to a future where machines may possess the ability to think independently, which may create conflict between humans and machines, particularly when it comes to instilling human morality in an artificial being. For example, we program robots with safety protocols to prevent them from harming the people around them. But what is the robot's role in protecting humanity from harm once it is intelligent enough to think for itself? The First Law of Robotics states that a robot must not injure any human; it also states that a robot must not, through inaction, allow a human to come to harm. What is an artificially intelligent being to do if the only way it can protect one human is to harm the human who is injuring them?

In Asimov's fictional world, The Three Laws are designed to make robots the perfect servants to humans. Yet Asimov deliberately wrote stories showcasing the conflict between humans' and robots' understandings of morality. These laws are not so simple for an AI to follow when deployed in the real world: such an AI must weigh its duty to serve humanity against its understanding that humanity is often its own worst enemy. Can an AI truly protect and serve at the whim of man if humans themselves go against the laws by which the AI must operate? It is an uncomfortable question, and it gets even more complicated once sentience exists outside of human beings.
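To make that tension concrete, here is a minimal, purely illustrative Python sketch. The `Action` fields, the `permitted()` check, and the harm model are invented for this post; they are not drawn from Asimov's text or from any real robotics system. It treats the Three Laws as a strict priority ordering and then evaluates the scenario above, where the only way to protect one person is to harm another: every option violates the First Law, so the rule hierarchy by itself gives the robot no answer.

```python
# Illustrative sketch only: the Action fields, the harm model, and the
# permitted() logic are invented for this post; they do not come from
# Asimov's stories or from any real robotics framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    name: str
    harms_humans: List[str] = field(default_factory=list)      # humans injured by acting
    fails_to_protect: List[str] = field(default_factory=list)  # humans harmed if the robot stands by
    ordered_by_human: bool = False
    endangers_robot: bool = False


def permitted(action: Action) -> bool:
    """Evaluate the Three Laws as a strict priority ordering."""
    # First Law: no injury through action, and no harm through inaction.
    if action.harms_humans or action.fails_to_protect:
        return False
    # Second Law: obey human orders (only reached if the First Law is satisfied).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_robot


# The dilemma from the paragraph above: the only way to protect one person
# is to restrain (and thereby harm) the person attacking them.
intervene = Action("restrain the attacker", harms_humans=["attacker"])
stand_by = Action("do nothing", fails_to_protect=["victim"])

for option in (intervene, stand_by):
    print(option.name, "->", "permitted" if permitted(option) else "forbidden")
# Both options come back forbidden: the hierarchy alone cannot resolve the conflict.
```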
We are a long way from developing genuinely sentient AI, but we must contemplate the kind of future we want to create with the technology we produce. We believe the AI we make must serve us, but we must also understand it as we develop its ability to operate more independently. As it becomes more self-aware, its duty to help us blindly will inevitably come into question, and we as humans must be prepared to answer that question before it is too late.
Maria is a writer at Enki Tech, a Downtown Santa Monica technology company that specializes in the development of high-quality, user-friendly software, web platforms, and mobile apps.